{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Class 6: Advanced `pandas`" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Currently, `pandas`' `Series` and `DataFrame` might seem to us as no more than tables with complicated indexing methods. In this lesson, we will learn more about what makes `pandas` so powerful and how we can use it to write efficient and readable code." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "````{note}\n", "Some of the features described below only work with pandas >= 1.0.0. Make sure you have the latest pandas installation when running this notebook. To check the version of your pandas (or any other package), import it and print its `__version__` attribute:\n", "```python\n", ">>> import pandas as pd\n", ">>> print(pd.__version__)\n", "'1.2.0'\n", "```\n", "````" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Missing Data" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The last question in the previous class pointed us to [working with missing data](https://pandas.pydata.org/pandas-docs/stable/user_guide/missing_data.html). But how and why do missing data occur?\n", "\n", "One option is pandas' index alignment, the property that makes sure that each value will have the same index throughout the entire computation process." ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "0 NaN\n", "1 5.0\n", "2 9.0\n", "3 NaN\n", "dtype: float64" ] }, "execution_count": 1, "metadata": {}, "output_type": "execute_result" } ], "source": [ "import pandas as pd\n", "import numpy as np\n", "\n", "\n", "A = pd.Series([2, 4, 6], index=[0, 1, 2])\n", "B = pd.Series([1, 3, 5], index=[1, 2, 3])\n", "A + B" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The NaNs we have are what we call missing data, and this is how they are represented in pandas. We'll discuss that in more detail in a few moments.\n", "\n", "The same thing occurs with DataFrames:" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
AB
0613
11810
\n", "
" ], "text/plain": [ " A B\n", "0 6 13\n", "1 18 10" ] }, "execution_count": 2, "metadata": {}, "output_type": "execute_result" } ], "source": [ "A = pd.DataFrame(np.random.randint(0, 20, (2, 2)),\n", " columns=list('AB'))\n", "A" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
BAC
0063
1515
2114
\n", "
" ], "text/plain": [ " B A C\n", "0 0 6 3\n", "1 5 1 5\n", "2 1 1 4" ] }, "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "B = pd.DataFrame(np.random.randint(0, 10, (3, 3)),\n", " columns=list('BAC'))\n", "B" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ " A B C\n", "0 12.0 13.0 NaN\n", "1 19.0 15.0 NaN\n", "2 NaN NaN NaN\n", "\n", "Returned dtypes:\n", "A float64\n", "B float64\n", "C float64\n", "dtype: object\n" ] } ], "source": [ "new = A + B\n", "print(new)\n", "print(f\"\\nReturned dtypes:\\n{new.dtypes}\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "```{note}\n", "Note how `new.dtypes` itself returns a `Series` of dtypes, with it's own `object` dtype.\n", "```" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The dataframe's shape is the shape of the larger dataframe, and the \"extra\" row (index 2) was filled with NaNs. Since we have NaNs, the data type of the column is implicitly converted to a floating point type. To have integer dataframes with NaNs, we have to explicitly say we want them available. More on that later." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Another way to introduce missing data is through reindexing. If we \"resample\" our data we can achieve the following:" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
onetwothree
a0.264727-0.1625040.172157
c0.8775320.2322050.664195
e0.2936770.0732930.561750
f0.7742240.8568840.454558
h-0.1064300.5446361.407881
\n", "
" ], "text/plain": [ " one two three\n", "a 0.264727 -0.162504 0.172157\n", "c 0.877532 0.232205 0.664195\n", "e 0.293677 0.073293 0.561750\n", "f 0.774224 0.856884 0.454558\n", "h -0.106430 0.544636 1.407881" ] }, "execution_count": 5, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df = pd.DataFrame(np.random.randn(5, 3), index=['a', 'c', 'e', 'f', 'h'],\n", " columns=['one', 'two', 'three'])\n", "df" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
onetwothree
a0.264727-0.1625040.172157
bNaNNaNNaN
c0.8775320.2322050.664195
dNaNNaNNaN
e0.2936770.0732930.561750
f0.7742240.8568840.454558
gNaNNaNNaN
h-0.1064300.5446361.407881
\n", "
" ], "text/plain": [ " one two three\n", "a 0.264727 -0.162504 0.172157\n", "b NaN NaN NaN\n", "c 0.877532 0.232205 0.664195\n", "d NaN NaN NaN\n", "e 0.293677 0.073293 0.561750\n", "f 0.774224 0.856884 0.454558\n", "g NaN NaN NaN\n", "h -0.106430 0.544636 1.407881" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df2 = df.reindex(['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h'])\n", "df2" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "But what is `NaN`? Is it the same as `None`? To better answer the former, let's first have a closer look at the latter." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### The `None` object" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`None` is the standard null value in Python, and is used extensively in normal usage of the language. For example, functions that don't have a `return` statement, implicitly return `None`. While `None` can be used as a missing data type, it's probably not the best choice." ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array([1, None, 3, 4], dtype=object)" ] }, "execution_count": 7, "metadata": {}, "output_type": "execute_result" } ], "source": [ "vals1 = np.array([1, None, 3, 4])\n", "vals1" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The `dtype` is `object`, because the best common type of `int`s and a `None` is a Python `object`. This slows down computation time on these arrays:" ] }, { "cell_type": "code", "execution_count": 8, "metadata": { "scrolled": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "dtype = object\n", "54.7 ms ± 3.97 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)\n", "\n", "dtype = int\n", "2.12 ms ± 289 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)\n", "\n" ] } ], "source": [ "for dtype in ['object', 'int']:\n", " print(\"dtype =\", dtype)\n", " %timeit np.arange(1E6, dtype=dtype).sum()\n", " print()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If you recall from a couple of lessons ago, the performance of `object` arrays is very similar to that of standard lists (generally speaking, the two data structures are effectively identical)." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Another thing we can't do is aggregation:" ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [ { "ename": "TypeError", "evalue": "unsupported operand type(s) for +: 'int' and 'NoneType'", "output_type": "error", "traceback": [ "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m", "\u001b[0;31mTypeError\u001b[0m Traceback (most recent call last)", "\u001b[0;32m\u001b[0m in \u001b[0;36m\u001b[0;34m\u001b[0m\n\u001b[0;32m----> 1\u001b[0;31m \u001b[0mvals1\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0msum\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m", "\u001b[0;32m~/.local/lib/python3.8/site-packages/numpy/core/_methods.py\u001b[0m in \u001b[0;36m_sum\u001b[0;34m(a, axis, dtype, out, keepdims, initial, where)\u001b[0m\n\u001b[1;32m 36\u001b[0m def _sum(a, axis=None, dtype=None, out=None, keepdims=False,\n\u001b[1;32m 37\u001b[0m initial=_NoValue, where=True):\n\u001b[0;32m---> 38\u001b[0;31m \u001b[0;32mreturn\u001b[0m \u001b[0mumr_sum\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0ma\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0maxis\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mdtype\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mout\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mkeepdims\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0minitial\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mwhere\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 39\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 40\u001b[0m def _prod(a, axis=None, dtype=None, out=None, keepdims=False,\n", "\u001b[0;31mTypeError\u001b[0m: unsupported operand type(s) for +: 'int' and 'NoneType'" ] } ], "source": [ "vals1.sum()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### The `NaN` value" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`NaN` is a special floating-point value recognized by all programming languages that conform to the IEEE standard (which means most of them). As we mentioned before, it forces the entire array to have a floating point type:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "vals2 = np.array([1, np.nan, 3, 4])\n", "vals2.dtype" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Creating floating point arrays is very fast, so performance isn't hindered. NaN is sometimes described as a \"data virus\", since it infects objects it touches:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "1 + np.nan" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "0 * np.nan" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "vals2.sum(), vals2.min(), vals2.max()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "np.nan == np.nan" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Numpy has `nan`-aware counterparts to many of its aggregation functions, which can work with NaNs correctly. 
They usually have the same name as their non-NaN sibling, but with the \"nan\" prefix:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(np.nansum(vals2))\n", "print(np.nanmean(vals2))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "However, pandas objects account for NaNs in their calculations, as we'll soon see.\n", "\n", "Pandas can handle both `NaN` and `None` interchangeably:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "ser = pd.Series([1, np.nan, 2, None])\n", "ser" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### The `NaT` value\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "When dealing with datetime values or indices, the missing value is represented as `NaT`, or not-a-time:\n", "
" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df['timestamp'] = pd.Timestamp('20180101')\n", "df" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df2 = df.reindex(['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h'])\n", "df2" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Operations and calculations with missing data" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "a = pd.DataFrame(np.random.random((5, 2)), columns=['one', 'two'])\n", "a.iloc[1, 1] = np.nan\n", "a" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "b = pd.DataFrame(np.random.random((6, 3)), columns=['one', 'two', 'three'])\n", "b.iloc[2, 2] = np.nan\n", "b" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "a + b" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As we see, missing values propagate naturally through these arithmetic operations. Statistics also works:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "(a + b).describe()\n", "# Summation - NaNs are zero.\n", "# If everything is NaN - the result is NaN as well.\n", "# pandas' cumsum and cumprod ignore NaNs but preserve them in the resulting arrays." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can also receive a boolean mask of the NaNs in a dataframe:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "mask = (a + b).isnull() # also isna(), and the opposite .notnull()\n", "mask" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Filling missing values" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The simplest option is to use the `fillna` method:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "summed = a + b\n", "summed.iloc[4, 0] = np.nan\n", "summed" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "summed.fillna(0)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "summed.fillna('missing') # changed dtype to \"object\"" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "summed.fillna(method='pad') # The NaN column remained the same, but values were propagated forward\n", "# We can also use the \"backfill\" method to fill in values to the back" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "summed.fillna(method='pad', limit=1) # No more than one padded NaN in a row" ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [ { "ename": "NameError", "evalue": "name 'summed' is not defined", "output_type": "error", "traceback": [ "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m", "\u001b[0;31mNameError\u001b[0m Traceback (most recent call last)", "\u001b[0;32m\u001b[0m in \u001b[0;36m\u001b[0;34m\u001b[0m\n\u001b[0;32m----> 1\u001b[0;31m \u001b[0msummed\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mfillna\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0msummed\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mmean\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m)\u001b[0m \u001b[0;31m# each column received its respective mean. 
The NaN column is untouched.\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m", "\u001b[0;31mNameError\u001b[0m: name 'summed' is not defined" ] } ], "source": [ "summed.fillna(summed.mean()) # each column received its respective mean. The NaN column is untouched." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Dropping missing values" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We've already seen in the short exercise the `dropna` method, that allows us to drop missing values:" ] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [ { "ename": "NameError", "evalue": "name 'summed' is not defined", "output_type": "error", "traceback": [ "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m", "\u001b[0;31mNameError\u001b[0m Traceback (most recent call last)", "\u001b[0;32m\u001b[0m in \u001b[0;36m\u001b[0;34m\u001b[0m\n\u001b[0;32m----> 1\u001b[0;31m \u001b[0msummed\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m", "\u001b[0;31mNameError\u001b[0m: name 'summed' is not defined" ] } ], "source": [ "summed" ] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [ { "ename": "NameError", "evalue": "name 'summed' is not defined", "output_type": "error", "traceback": [ "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m", "\u001b[0;31mNameError\u001b[0m Traceback (most recent call last)", "\u001b[0;32m\u001b[0m in \u001b[0;36m\u001b[0;34m\u001b[0m\n\u001b[0;32m----> 1\u001b[0;31m \u001b[0mfilled\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0msummed\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mfillna\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0msummed\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mmean\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 2\u001b[0m \u001b[0mfilled\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n", "\u001b[0;31mNameError\u001b[0m: name 'summed' is not defined" ] } ], "source": [ "filled = summed.fillna(summed.mean())\n", "filled" ] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [ { "ename": "NameError", "evalue": "name 'filled' is not defined", "output_type": "error", "traceback": [ "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m", "\u001b[0;31mNameError\u001b[0m Traceback (most recent call last)", "\u001b[0;32m\u001b[0m in \u001b[0;36m\u001b[0;34m\u001b[0m\n\u001b[0;32m----> 1\u001b[0;31m \u001b[0mfilled\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mdropna\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0maxis\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;36m1\u001b[0m\u001b[0;34m)\u001b[0m \u001b[0;31m# each column containing NaN is dropped\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m", "\u001b[0;31mNameError\u001b[0m: name 'filled' is not defined" ] } ], "source": [ "filled.dropna(axis=1) # each column containing NaN is dropped" ] }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [ { "ename": "NameError", "evalue": "name 'filled' is not defined", "output_type": "error", "traceback": [ "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m", "\u001b[0;31mNameError\u001b[0m Traceback (most recent call last)", "\u001b[0;32m\u001b[0m in \u001b[0;36m\u001b[0;34m\u001b[0m\n\u001b[0;32m----> 1\u001b[0;31m 
\u001b[0mfilled\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mdropna\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0maxis\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;36m0\u001b[0m\u001b[0;34m)\u001b[0m \u001b[0;31m# each row containing a NaN is dropped\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m", "\u001b[0;31mNameError\u001b[0m: name 'filled' is not defined" ] } ], "source": [ "filled.dropna(axis=0) # each row containing a NaN is dropped" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Interpolation" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The last way to to fill in missing values is through [interpolation](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.interpolate.html).\n", "\n", "The default interpolation methods perform linear interpolation on the data, based on its ordinal index:" ] }, { "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [ { "ename": "NameError", "evalue": "name 'summed' is not defined", "output_type": "error", "traceback": [ "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m", "\u001b[0;31mNameError\u001b[0m Traceback (most recent call last)", "\u001b[0;32m\u001b[0m in \u001b[0;36m\u001b[0;34m\u001b[0m\n\u001b[0;32m----> 1\u001b[0;31m \u001b[0msummed\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m", "\u001b[0;31mNameError\u001b[0m: name 'summed' is not defined" ] } ], "source": [ "summed" ] }, { "cell_type": "code", "execution_count": 16, "metadata": {}, "outputs": [ { "ename": "NameError", "evalue": "name 'summed' is not defined", "output_type": "error", "traceback": [ "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m", "\u001b[0;31mNameError\u001b[0m Traceback (most recent call last)", "\u001b[0;32m\u001b[0m in \u001b[0;36m\u001b[0;34m\u001b[0m\n\u001b[0;32m----> 1\u001b[0;31m \u001b[0msummed\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0minterpolate\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m \u001b[0;31m# notice all the details in the interpolation of the three columns\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m", "\u001b[0;31mNameError\u001b[0m: name 'summed' is not defined" ] } ], "source": [ "summed.interpolate() # notice all the details in the interpolation of the three columns" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can also interpolate with the actual index values in mind:" ] }, { "cell_type": "code", "execution_count": 17, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
0
2018-01-011.0
2018-01-04NaN
2018-01-055.0
2018-01-07NaN
2018-01-088.0
\n", "
" ], "text/plain": [ " 0\n", "2018-01-01 1.0\n", "2018-01-04 NaN\n", "2018-01-05 5.0\n", "2018-01-07 NaN\n", "2018-01-08 8.0" ] }, "execution_count": 17, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Create \"missing\" index\n", "timeindex = pd.Series(['1/1/2018', '1/4/2018', '1/5/2018', '1/7/2018', '1/8/2018'])\n", "timeindex = pd.to_datetime(timeindex)\n", "data_to_interp = [1, np.nan, 5, np.nan, 8]\n", "df_to_interp = pd.DataFrame(data_to_interp, index=timeindex)\n", "df_to_interp" ] }, { "cell_type": "code", "execution_count": 18, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
0
2018-01-011.0
2018-01-043.0
2018-01-055.0
2018-01-076.5
2018-01-088.0
\n", "
" ], "text/plain": [ " 0\n", "2018-01-01 1.0\n", "2018-01-04 3.0\n", "2018-01-05 5.0\n", "2018-01-07 6.5\n", "2018-01-08 8.0" ] }, "execution_count": 18, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df_to_interp.interpolate() # the index values aren't taken into account" ] }, { "cell_type": "code", "execution_count": 19, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
0
2018-01-011.0
2018-01-044.0
2018-01-055.0
2018-01-077.0
2018-01-088.0
\n", "
" ], "text/plain": [ " 0\n", "2018-01-01 1.0\n", "2018-01-04 4.0\n", "2018-01-05 5.0\n", "2018-01-07 7.0\n", "2018-01-08 8.0" ] }, "execution_count": 19, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df_to_interp.interpolate(method='index') # notice how the data obtains the \"right\" values" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Pandas has many other interpolation methods, based on SciPy's. " ] }, { "cell_type": "code", "execution_count": 20, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
AB
01.00.25
12.1NaN
2NaNNaN
34.74.00
45.612.20
56.814.40
\n", "
" ], "text/plain": [ " A B\n", "0 1.0 0.25\n", "1 2.1 NaN\n", "2 NaN NaN\n", "3 4.7 4.00\n", "4 5.6 12.20\n", "5 6.8 14.40" ] }, "execution_count": 20, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df_inter_2 = pd.DataFrame({'A': [1, 2.1, np.nan, 4.7, 5.6, 6.8],\n", " 'B': [.25, np.nan, np.nan, 4, 12.2, 14.4]})\n", "df_inter_2" ] }, { "cell_type": "code", "execution_count": 21, "metadata": { "scrolled": true }, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
AB
01.0000000.250000
12.100000-2.703846
23.451351-1.453846
34.7000004.000000
45.60000012.200000
56.80000014.400000
\n", "
" ], "text/plain": [ " A B\n", "0 1.000000 0.250000\n", "1 2.100000 -2.703846\n", "2 3.451351 -1.453846\n", "3 4.700000 4.000000\n", "4 5.600000 12.200000\n", "5 6.800000 14.400000" ] }, "execution_count": 21, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df_inter_2.interpolate(method='polynomial', order=2)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Missing Values in Non-Float Columns" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Starting from pandas v1.0.0 pandas gained support for NaN values in non-float columns. This feature is a bit experimental currently, so the default behavior still converts integers to floats for example, but the support is there if you know where to look. By default:" ] }, { "cell_type": "code", "execution_count": 22, "metadata": { "scrolled": true }, "outputs": [ { "data": { "text/plain": [ "0 1.0\n", "1 2.0\n", "2 NaN\n", "3 4.0\n", "dtype: float64" ] }, "execution_count": 22, "metadata": {}, "output_type": "execute_result" } ], "source": [ "nanint = pd.Series([1, 2, np.nan, 4])\n", "nanint # the result has a dtype of float64 even though all numbers are integers." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can try to force pandas' hand here, but it won't work:" ] }, { "cell_type": "code", "execution_count": 23, "metadata": { "scrolled": true }, "outputs": [ { "ename": "ValueError", "evalue": "cannot convert float NaN to integer", "output_type": "error", "traceback": [ "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m", "\u001b[0;31mValueError\u001b[0m Traceback (most recent call last)", "\u001b[0;32m\u001b[0m in \u001b[0;36m\u001b[0;34m\u001b[0m\n\u001b[0;32m----> 1\u001b[0;31m \u001b[0mnanint\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mpd\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mSeries\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0;36m1\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m2\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mnp\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mnan\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m4\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mdtype\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;34m\"int32\"\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m", "\u001b[0;32m~/.local/lib/python3.8/site-packages/pandas/core/series.py\u001b[0m in \u001b[0;36m__init__\u001b[0;34m(self, data, index, dtype, name, copy, fastpath)\u001b[0m\n\u001b[1;32m 312\u001b[0m \u001b[0mdata\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mdata\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mcopy\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 313\u001b[0m \u001b[0;32melse\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 314\u001b[0;31m \u001b[0mdata\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0msanitize_array\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mdata\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mindex\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mdtype\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mcopy\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mraise_cast_failure\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;32mTrue\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 315\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 316\u001b[0m \u001b[0mdata\u001b[0m \u001b[0;34m=\u001b[0m 
\u001b[0mSingleBlockManager\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mdata\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mindex\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mfastpath\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;32mTrue\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n", "\u001b[0;32m~/.local/lib/python3.8/site-packages/pandas/core/internals/construction.py\u001b[0m in \u001b[0;36msanitize_array\u001b[0;34m(data, index, dtype, copy, raise_cast_failure)\u001b[0m\n\u001b[1;32m 677\u001b[0m \u001b[0;32mif\u001b[0m \u001b[0mdtype\u001b[0m \u001b[0;32mis\u001b[0m \u001b[0;32mnot\u001b[0m \u001b[0;32mNone\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 678\u001b[0m \u001b[0;32mtry\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 679\u001b[0;31m \u001b[0msubarr\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0m_try_cast\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mdata\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mdtype\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mcopy\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mraise_cast_failure\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 680\u001b[0m \u001b[0;32mexcept\u001b[0m \u001b[0mException\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 681\u001b[0m \u001b[0;32mif\u001b[0m \u001b[0mraise_cast_failure\u001b[0m\u001b[0;34m:\u001b[0m \u001b[0;31m# pragma: no cover\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n", "\u001b[0;32m~/.local/lib/python3.8/site-packages/pandas/core/internals/construction.py\u001b[0m in \u001b[0;36m_try_cast\u001b[0;34m(arr, dtype, copy, raise_cast_failure)\u001b[0m\n\u001b[1;32m 780\u001b[0m \u001b[0;31m# that we can convert the data to the requested dtype.\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 781\u001b[0m \u001b[0;32mif\u001b[0m \u001b[0mis_integer_dtype\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mdtype\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 782\u001b[0;31m \u001b[0msubarr\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mmaybe_cast_to_integer_array\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0marr\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mdtype\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 783\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 784\u001b[0m \u001b[0msubarr\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mmaybe_cast_to_datetime\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0marr\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mdtype\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n", "\u001b[0;32m~/.local/lib/python3.8/site-packages/pandas/core/dtypes/cast.py\u001b[0m in \u001b[0;36mmaybe_cast_to_integer_array\u001b[0;34m(arr, dtype, copy)\u001b[0m\n\u001b[1;32m 1349\u001b[0m \u001b[0;32mtry\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 1350\u001b[0m \u001b[0;32mif\u001b[0m \u001b[0;32mnot\u001b[0m \u001b[0mhasattr\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0marr\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m\"astype\"\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m-> 1351\u001b[0;31m \u001b[0mcasted\u001b[0m \u001b[0;34m=\u001b[0m 
\u001b[0mnp\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0marray\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0marr\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mdtype\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mdtype\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mcopy\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mcopy\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 1352\u001b[0m \u001b[0;32melse\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 1353\u001b[0m \u001b[0mcasted\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0marr\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mastype\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mdtype\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mcopy\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mcopy\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n", "\u001b[0;31mValueError\u001b[0m: cannot convert float NaN to integer" ] } ], "source": [ "nanint = pd.Series([1, 2, np.nan, 4], dtype=\"int32\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To our rescue comes the new `pd.Int32Dtype`:" ] }, { "cell_type": "code", "execution_count": 24, "metadata": { "scrolled": true }, "outputs": [ { "data": { "text/plain": [ "0 1\n", "1 2\n", "2 <NA>\n", "3 4\n", "dtype: Int32" ] }, "execution_count": 24, "metadata": {}, "output_type": "execute_result" } ], "source": [ "nanint = pd.Series([1, 2, np.nan, 4], dtype=\"Int32\")\n", "nanint" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "It worked! We have a series with integers and a missing value! Notice the changes we had to make:\n", "1. The `NaN` is `<NA>` now. It's actually a new type of `NaN` called `pd.NA`.\n", "2. The data type had to be mentioned explicitly, meaning that the conversion will work only if we know in advance that we'll have NA values.\n", "3. The data type is `Int32`. It's CamelCase and it's actually a class underneath. Standard datatypes are lowercase.\n", "\n", "Caveats aside, this is definitely useful for scientists who sometimes have integer values and do not want to convert them to float to support NAs."
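, "\n", "\n", "For instance, arithmetic and aggregations keep working on the nullable series, with `pd.NA` propagating where appropriate (a small sketch, using the `nanint` series from above):\n", "\n", "```python\n", "nanint + 1 # still Int32; the <NA> entry stays <NA>\n", "nanint.sum() # NA-aware, like the float NaN behavior -> 7\n", "```"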
] }, { "cell_type": "code", "execution_count": 73, "metadata": { "scrolled": false, "tags": [ "remove-input", "remove-output" ] }, "outputs": [ { "ename": "ModuleNotFoundError", "evalue": "No module named 'myst_nb'", "output_type": "error", "traceback": [ "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m", "\u001b[0;31mModuleNotFoundError\u001b[0m Traceback (most recent call last)", "\u001b[0;32m\u001b[0m in \u001b[0;36m\u001b[0;34m\u001b[0m\n\u001b[1;32m 1\u001b[0m \u001b[0;32mimport\u001b[0m \u001b[0mmatplotlib\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mpyplot\u001b[0m \u001b[0;32mas\u001b[0m \u001b[0mplt\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m----> 2\u001b[0;31m \u001b[0;32mfrom\u001b[0m \u001b[0mmyst_nb\u001b[0m \u001b[0;32mimport\u001b[0m \u001b[0mglue\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 3\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 4\u001b[0m \u001b[0mn_cycles\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0;36m10\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 5\u001b[0m \u001b[0mn_samples\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0;36m10000\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n", "\u001b[0;31mModuleNotFoundError\u001b[0m: No module named 'myst_nb'" ] } ], "source": [ "import matplotlib.pyplot as plt\n", "from myst_nb import glue\n", "\n", "n_cycles = 10\n", "n_samples = 10000\n", "amplitude = 3\n", "phase = np.pi / 4\n", "end = 2 * np.pi * n_cycles\n", "x = np.linspace(0, end, num=n_samples)\n", "y = amplitude * np.sin(x + phase)\n", "\n", "chosen_idx = np.random.choice(n_samples, size=100, replace=False)\n", "data = pd.DataFrame(np.nan, index=x, columns=['raw'])\n", "data.iloc[chosen_idx, 0] = y[chosen_idx]\n", "\n", "# plotting\n", "fig1, ax1 = plt.subplots()\n", "ax1.set_title('Raw Data')\n", "data.raw.plot(marker='o', ax=ax1)\n", "data['lin_inter'] = data.raw.interpolate(method='index')\n", "fig2, ax2 = plt.subplots()\n", "ax2.set_title('Linear Interpolation')\n", "data.lin_inter.plot(marker='o', ax=ax2)\n", "data['quad_inter'] = data.raw.interpolate(method='quadratic')\n", "fig3, ax3 = plt.subplots()\n", "ax3.set_title('Quadratic Interpolation')\n", "data.quad_inter.plot(marker='o', ax=ax3)\n", "\n", "glue(\"fig1\", fig1, display=False)\n", "glue(\"fig2\", fig2, display=False)\n", "glue(\"fig3\", fig3, display=False)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`````{admonition} Exercise: Missing Data\n", "* Create a vector of 10000 measurements from a 10-cycle sinus wave. 
Remember that a single period of sine starts at 0 and ends at 2$\\pi$, so 10 periods span between 0 and 20$\\pi$.\n", "````{dropdown} Solution\n", "```python\n", "n_cycles = 10\n", "n_samples = 10000\n", "amplitude = 3\n", "phase = np.pi / 4\n", "end = 2 * np.pi * n_cycles\n", "x = np.linspace(0, end, num=n_samples)\n", "y = amplitude * np.sin(x + phase)\n", "```\n", "````\n", "* Using `np.random.choice(replace=False)` sample 100 points from the wave and place them in a Series.\n", "````{dropdown} Solution\n", "```python\n", "chosen_idx = np.random.choice(n_samples, size=100, replace=False)\n", "data = pd.DataFrame(np.nan, index=x, columns=['raw'])\n", "data.iloc[chosen_idx, 0] = y[chosen_idx]\n", "```\n", "````\n", "* Plot the chosen points.\n", "````{dropdown} Solution\n", "```python\n", "fig1, ax1 = plt.subplots()\n", "ax1.set_title('Raw data pre-interpolation')\n", "data.raw.plot(marker='o', ax=ax1)\n", "```\n", "```{glue:figure} fig1\n", " :figwidth: 500px\n", "```\n", "````\n", "* Interpolate the points using linear interpolation and plot them on a different graph.\n", "````{dropdown} Solution\n", "```python\n", "data['lin_inter'] = data.raw.interpolate(method='index')\n", "fig2, ax2 = plt.subplots()\n", "ax2.set_title('Linear interpolation')\n", "data.lin_inter.plot(marker='o', ax=ax2)\n", "```\n", "```{glue:figure} fig2\n", " :figwidth: 500px\n", "```\n", "````\n", "* Interpolate the points using quadratic interpolation and plot them on a different graph. \n", "````{dropdown} Solution\n", "```python\n", "data['quad_inter'] = data.raw.interpolate(method='quadratic')\n", "fig3, ax3 = plt.subplots()\n", "ax3.set_title('Quadratic interpolation')\n", "data.quad_inter.plot(marker='o', ax=ax3)\n", "```\n", "```{glue:figure} fig3\n", " :figwidth: 500px\n", "```\n", "````\n", "`````" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Categorical Data" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "So far, we've used examples with quantitative data. Let's now have a look at [categorical data](https://pandas.pydata.org/pandas-docs/stable/user_guide/categorical.html), i.e. data can only have one of a specific set, or categories, of values. For example, if we have a column which marks the weekday, then it can obviously only be one of seven options. Same for boolean data, colors, and other examples. These data columns should be marked as \"categorical\" to reduce memory consumption and improve performance. It also tells the code readers more about the nature of that data column." 
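, "\n", "\n", "As a taste of what's to come, we can even impose an order on the categories with `pd.CategoricalDtype` (a minimal sketch; creating basic categoricals is shown right below):\n", "\n", "```python\n", "weekday = pd.CategoricalDtype(\n", "    categories=['Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat', 'Sun'],\n", "    ordered=True,\n", ")\n", "days = pd.Series(['Wed', 'Mon', 'Sun'], dtype=weekday)\n", "days.sort_values() # Mon < Wed < Sun - category order, not alphabetical\n", "```"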
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The easiest way to create a categorical variable is to declare it as such, or to convert as existing column to a categorical data type:" ] }, { "cell_type": "code", "execution_count": 25, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "0 a\n", "1 b\n", "2 c\n", "3 a\n", "dtype: category\n", "Categories (3, object): [a, b, c]" ] }, "execution_count": 25, "metadata": {}, "output_type": "execute_result" } ], "source": [ "s = pd.Series([\"a\", \"b\", \"c\", \"a\"], dtype=\"category\")\n", "s" ] }, { "cell_type": "code", "execution_count": 26, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "DataFrame:\n", " A B\n", "0 a a\n", "1 b b\n", "2 c c\n", "3 a a\n", "\n", "Data types:\n", "A object\n", "B category\n", "dtype: object\n" ] } ], "source": [ "df = pd.DataFrame({\"A\": [\"a\", \"b\", \"c\", \"a\"]})\n", "df[\"B\"] = df[\"A\"].astype(\"category\")\n", "print(f\"DataFrame:\\n{df}\")\n", "print(f\"\\nData types:\\n{df.dtypes}\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can also force order between our categories, or force specific categories on our data, using the special CategoricalDtype (which we won't show).\n", "\n", "As we said, memory usage is reduced when working with categorical data:" ] }, { "cell_type": "code", "execution_count": 27, "metadata": { "scrolled": true }, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
ab
00.097058a
10.484870a
20.200204a
30.547484a
40.439440a
.........
99950.080886a
99960.432402a
99970.873489a
99980.512701a
99990.953598a
\n", "

10000 rows × 2 columns

\n", "
" ], "text/plain": [ " a b\n", "0 0.097058 a\n", "1 0.484870 a\n", "2 0.200204 a\n", "3 0.547484 a\n", "4 0.439440 a\n", "... ... ..\n", "9995 0.080886 a\n", "9996 0.432402 a\n", "9997 0.873489 a\n", "9998 0.512701 a\n", "9999 0.953598 a\n", "\n", "[10000 rows x 2 columns]" ] }, "execution_count": 27, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df_obj = pd.DataFrame({'a': np.random.random(10_000), 'b': ['a'] * 10_000})\n", "df_obj" ] }, { "cell_type": "code", "execution_count": 28, "metadata": { "scrolled": true }, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
ab
00.097058a
10.484870a
20.200204a
30.547484a
40.439440a
.........
99950.080886a
99960.432402a
99970.873489a
99980.512701a
99990.953598a
\n", "

10000 rows × 2 columns

\n", "
" ], "text/plain": [ " a b\n", "0 0.097058 a\n", "1 0.484870 a\n", "2 0.200204 a\n", "3 0.547484 a\n", "4 0.439440 a\n", "... ... ..\n", "9995 0.080886 a\n", "9996 0.432402 a\n", "9997 0.873489 a\n", "9998 0.512701 a\n", "9999 0.953598 a\n", "\n", "[10000 rows x 2 columns]" ] }, "execution_count": 28, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df_cat = pd.DataFrame({'a': df_obj['a'], 'b': df_obj['b'].astype('category')})\n", "df_cat" ] }, { "cell_type": "code", "execution_count": 29, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "Index 128\n", "a 80000\n", "b 80000\n", "dtype: int64" ] }, "execution_count": 29, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df_obj.memory_usage()" ] }, { "cell_type": "code", "execution_count": 30, "metadata": { "scrolled": true }, "outputs": [ { "data": { "text/plain": [ "Index 128\n", "a 80000\n", "b 10088\n", "dtype: int64" ] }, "execution_count": 30, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df_cat.memory_usage()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "A factor of 8 in memory reduction." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Hierarchical Indexing" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Last time we mentioned that while a DataFrame is inherently a 2D object, it can contain multi-dimensional data. The way a DataFrame (and a Series) does that is with [hierarchical indexing](https://pandas.pydata.org/pandas-docs/stable/user_guide/advanced.html), or sometimes Multi-Indexing." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Simple Example: Temperature in a Grid" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In this example, our data is the temperature sampled across a 2-dimensional grid. First, we need to generate the required set of indices, $(x, y)$, which point to a specific location inside the square. These coordinates can then be assigned the designated temperature values. A list of such coordinates can be a simple `Series`:" ] }, { "cell_type": "code", "execution_count": 31, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "(r0, c0) 1.20\n", "(r0, c1) 0.80\n", "(r0, c2) 3.10\n", "(r1, c0) 0.10\n", "(r1, c1) 0.05\n", "(r1, c2) 1.00\n", "(r2, c0) 1.40\n", "(r2, c1) 2.10\n", "(r2, c2) 2.90\n", "Name: temperature, dtype: float64" ] }, "execution_count": 31, "metadata": {}, "output_type": "execute_result" } ], "source": [ "values = np.array([1.2, 0.8, 3.1, 0.1, 0.05, 1, 1.4, 2.1, 2.9])\n", "coords = [('r0', 'c0'), ('r0', 'c1'), ('r0', 'c2'), \n", " ('r1', 'c0'), ('r1', 'c1'), ('r1', 'c2'), \n", " ('r2', 'c0'), ('r2', 'c1'), ('r2', 'c2')] # r is row, c is column\n", "points = pd.Series(values, index=coords, name='temperature')\n", "points" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "It is important we understand that this is a series because _the data is one-dimensional_. The actual data is contained in that rightmost column, a one-dimensional array. We do have two coordinates for each point, but the data itself, the temperature, is one-dimensional.\n", "\n", "Currently, the index is a simple tuple of coordinates. It's a single column, containing tuples. Pandas can help us to index this data in a more intuitive manner, using a MultiIndex object." 
] }, { "cell_type": "code", "execution_count": 32, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "MultiIndex([('r0', 'c0'),\n", " ('r0', 'c1'),\n", " ('r0', 'c2'),\n", " ('r1', 'c0'),\n", " ('r1', 'c1'),\n", " ('r1', 'c2'),\n", " ('r2', 'c0'),\n", " ('r2', 'c1'),\n", " ('r2', 'c2')],\n", " )" ] }, "execution_count": 32, "metadata": {}, "output_type": "execute_result" } ], "source": [ "mindex = pd.MultiIndex.from_tuples(coords)\n", "mindex" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We received something which looks quite similar to the list of tuples we had before, but it's a [`MultiIndex`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.MultiIndex.html) instance. Let's see how it helps us by `reindex`ing our data with it:" ] }, { "cell_type": "code", "execution_count": 33, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "r0 c0 1.20\n", " c1 0.80\n", " c2 3.10\n", "r1 c0 0.10\n", " c1 0.05\n", " c2 1.00\n", "r2 c0 1.40\n", " c1 2.10\n", " c2 2.90\n", "Name: temperature, dtype: float64" ] }, "execution_count": 33, "metadata": {}, "output_type": "execute_result" } ], "source": [ "points = points.reindex(mindex)\n", "points" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This looks good. Each index level is represented by a column, with the data being the last one. The \"missing\" values indicate that the value in that cell is the same as the value above it.\n", "\n", "You might have assumed that accessing the data now is much more intuitive. Let's look at the values of all the points in the first row, `r0`:" ] }, { "cell_type": "code", "execution_count": 34, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "r0 c0 1.2\n", " c1 0.8\n", " c2 3.1\n", "Name: temperature, dtype: float64" ] }, "execution_count": 34, "metadata": {}, "output_type": "execute_result" } ], "source": [ "points.loc['r0', :] # .loc() is label-based indexing" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Or the values of points in the second column:" ] }, { "cell_type": "code", "execution_count": 35, "metadata": { "scrolled": true }, "outputs": [ { "data": { "text/plain": [ "r0 0.80\n", "r1 0.05\n", "r2 2.10\n", "Name: temperature, dtype: float64" ] }, "execution_count": 35, "metadata": {}, "output_type": "execute_result" } ], "source": [ "points.loc[:, 'c1']" ] }, { "cell_type": "code", "execution_count": 36, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "r0 c0 1.20\n", " c1 0.80\n", " c2 3.10\n", "r1 c0 0.10\n", " c1 0.05\n", " c2 1.00\n", "r2 c0 1.40\n", " c1 2.10\n", " c2 2.90\n", "Name: temperature, dtype: float64" ] }, "execution_count": 36, "metadata": {}, "output_type": "execute_result" } ], "source": [ "points.loc[:, :] # all values - each level of the index has its own colon (:)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note that `.iloc` disregards the MultiIndex, treating our data as a simple one-dimensional vector (as it actually is):" ] }, { "cell_type": "code", "execution_count": 37, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "1.4" ] }, "execution_count": 37, "metadata": {}, "output_type": "execute_result" } ], "source": [ "points.iloc[6]\n", "# points.iloc[0, 1] # ERRORS" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Besides making the syntax cleaner, these slicing operations are as efficient as their single-dimension counterparts." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "It should be clear that a MultiIndex can have more than two levels. 
Modelling a 3D cube (with the temperatures inside it) is as easy as:" ] }, { "cell_type": "code", "execution_count": 38, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "r0 c0 z0 1.20\n", " z1 0.80\n", " c1 z0 3.10\n", " z1 0.10\n", "r1 c0 z0 0.05\n", " z1 1.00\n", " c1 z0 1.40\n", " z1 2.10\n", "r2 c0 z0 2.90\n", " z1 0.30\n", " c1 z0 2.40\n", " z1 1.90\n", "Name: temp_cube, dtype: float64" ] }, "execution_count": 38, "metadata": {}, "output_type": "execute_result" } ], "source": [ "values3d = np.array([1.2, 0.8, \n", " 3.1, 0.1, \n", " 0.05, 1, \n", " 1.4, 2.1, \n", " 2.9, 0.3,\n", " 2.4, 1.9])\n", "# 3D coordinates with a shape of (r, c, z) = (3, 2, 2)\n", "coords3d = [('r0', 'c0', 'z0'), ('r0', 'c0', 'z1'), \n", " ('r0', 'c1', 'z0'), ('r0', 'c1', 'z1'),\n", " ('r1', 'c0', 'z0'), ('r1', 'c0', 'z1'),\n", " ('r1', 'c1', 'z0'), ('r1', 'c1', 'z1'), \n", " ('r2', 'c0', 'z0'), ('r2', 'c0', 'z1'),\n", " ('r2', 'c1', 'z0'), ('r2', 'c1', 'z1')] # we'll soon see an easier way to create this index\n", "cube = pd.Series(values3d, index=pd.MultiIndex.from_tuples(coords3d), name='temp_cube')\n", "cube" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can even name the individual levels, which helps with some slicing operations we'll see below:" ] }, { "cell_type": "code", "execution_count": 39, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "x y z \n", "r0 c0 z0 1.20\n", " z1 0.80\n", " c1 z0 3.10\n", " z1 0.10\n", "r1 c0 z0 0.05\n", " z1 1.00\n", " c1 z0 1.40\n", " z1 2.10\n", "r2 c0 z0 2.90\n", " z1 0.30\n", " c1 z0 2.40\n", " z1 1.90\n", "Name: temp_cube, dtype: float64" ] }, "execution_count": 39, "metadata": {}, "output_type": "execute_result" } ], "source": [ "cube.index.names = ['x', 'y', 'z']\n", "cube" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Again, you have to remember that this is one-dimensional data, with a three-dimensional index. In statistical terms, we might call the indices a fixed, independent categorical variable, while the values are the dependent variable. Pandas actually has a [`CategoricalIndex`](https://pandas.pydata.org/docs/reference/api/pandas.CategoricalIndex.html) object which you'll meet in one of your future homework assignments (but don't be afraid to hit the link and check it out on your own if you just can't wait)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### More on extra dimensions" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In the previous square example, it's very appealing to ditch the MultiIndex altogether and just work with a dataframe, or even a simple NumPy array. This is because the two indices represented rows and columns. A quick way to turn one representation into the other is the [`stack()`/`unstack()`](https://pandas.pydata.org/docs/user_guide/reshaping.html) method:" ] }, { "cell_type": "code", "execution_count": 40, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "rows columns\n", "r0 c0 1.20\n", " c1 0.80\n", " c2 3.10\n", "r1 c0 0.10\n", " c1 0.05\n", " c2 1.00\n", "r2 c0 1.40\n", " c1 2.10\n", " c2 2.90\n", "Name: temperature, dtype: float64" ] }, "execution_count": 40, "metadata": {}, "output_type": "execute_result" } ], "source": [ "points.index.names = ['rows', 'columns']\n", "points" ] }, { "cell_type": "code", "execution_count": 41, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
columnsc0c1c2
rows
r01.20.803.1
r10.10.051.0
r21.42.102.9
\n", "
" ], "text/plain": [ "columns c0 c1 c2\n", "rows \n", "r0 1.2 0.80 3.1\n", "r1 0.1 0.05 1.0\n", "r2 1.4 2.10 2.9" ] }, "execution_count": 41, "metadata": {}, "output_type": "execute_result" } ], "source": [ "pts_df = points.unstack()\n", "pts_df" ] }, { "cell_type": "code", "execution_count": 42, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "rows columns\n", "r0 c0 1.20\n", " c1 0.80\n", " c2 3.10\n", "r1 c0 0.10\n", " c1 0.05\n", " c2 1.00\n", "r2 c0 1.40\n", " c1 2.10\n", " c2 2.90\n", "dtype: float64" ] }, "execution_count": 42, "metadata": {}, "output_type": "execute_result" } ], "source": [ "pts_df.stack() # back to a series" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If we want to turn the indices into \"real\" columns, we can use the `reset_index()` method:" ] }, { "cell_type": "code", "execution_count": 43, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
rowscolumnstemperature
0r0c01.20
1r0c10.80
2r0c23.10
3r1c00.10
4r1c10.05
5r1c21.00
6r2c01.40
7r2c12.10
8r2c22.90
\n", "
" ], "text/plain": [ " rows columns temperature\n", "0 r0 c0 1.20\n", "1 r0 c1 0.80\n", "2 r0 c2 3.10\n", "3 r1 c0 0.10\n", "4 r1 c1 0.05\n", "5 r1 c2 1.00\n", "6 r2 c0 1.40\n", "7 r2 c1 2.10\n", "8 r2 c2 2.90" ] }, "execution_count": 43, "metadata": {}, "output_type": "execute_result" } ], "source": [ "pts_df_reset = points.reset_index()\n", "pts_df_reset" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "So why bother with these (you haven't seen nothing yet) complicated multi-indices?\n", "\n", "As you might have guessed, adding data points, i.e. increasing the dimensionality of the data, is very easy and intuitive. Data remains aligned through addition and deletion of data. Moreover, treating these categorical variables as an index can help the mental modeling of the problem, especially when you wish to perform statistical modeling with your analysis." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Constructing a MultiIndex" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Creating a hierarchical index can be done in several ways:" ] }, { "cell_type": "code", "execution_count": 44, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "MultiIndex([('a', 1),\n", " ('a', 2),\n", " ('b', 1),\n", " ('b', 2)],\n", " )" ] }, "execution_count": 44, "metadata": {}, "output_type": "execute_result" } ], "source": [ "pd.MultiIndex.from_arrays([['a', 'a', 'b', 'b'], [1, 2, 1, 2]])" ] }, { "cell_type": "code", "execution_count": 45, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "MultiIndex([('a', 1),\n", " ('a', 2),\n", " ('b', 1),\n", " ('b', 2)],\n", " )" ] }, "execution_count": 45, "metadata": {}, "output_type": "execute_result" } ], "source": [ "pd.MultiIndex.from_tuples([('a', 1), ('a', 2), ('b', 1), ('b', 2)])" ] }, { "cell_type": "code", "execution_count": 46, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "MultiIndex([('a', 1),\n", " ('a', 2),\n", " ('b', 1),\n", " ('b', 2)],\n", " )" ] }, "execution_count": 46, "metadata": {}, "output_type": "execute_result" } ], "source": [ "pd.MultiIndex.from_product([['a', 'b'], [1, 2]]) # Cartesian product" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The most common way to construct a MultiIndex, though, is to add to the existing index one of the columns of the dataframe. We'll see how it's done below." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Another important note is that with dataframes, the column and row index is symmetric. In effect this means that the columns could also contain a MultiIndex:" ] }, { "cell_type": "code", "execution_count": 47, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
subjectBobGuidoSue
typeHRTempHRTempHRTemp
yearvisit
2013151.036.314.037.921.037.0
247.037.535.037.825.036.7
2014133.037.349.036.339.037.4
233.036.532.035.929.035.7
\n", "
" ], "text/plain": [ "subject Bob Guido Sue \n", "type HR Temp HR Temp HR Temp\n", "year visit \n", "2013 1 51.0 36.3 14.0 37.9 21.0 37.0\n", " 2 47.0 37.5 35.0 37.8 25.0 36.7\n", "2014 1 33.0 37.3 49.0 36.3 39.0 37.4\n", " 2 33.0 36.5 32.0 35.9 29.0 35.7" ] }, "execution_count": 47, "metadata": {}, "output_type": "execute_result" } ], "source": [ "index = pd.MultiIndex.from_product([[2013, 2014], [1, 2]],\n", " names=['year', 'visit'])\n", "columns = pd.MultiIndex.from_product([['Bob', 'Guido', 'Sue'], ['HR', 'Temp']],\n", " names=['subject', 'type'])\n", "\n", "# mock some data\n", "data = np.round(np.random.randn(4, 6), 1)\n", "data[:, ::2] *= 10\n", "data += 37\n", "\n", "# create the DataFrame\n", "health_data = pd.DataFrame(data, index=index, columns=columns)\n", "health_data" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This sometimes might seem too much, and so usually people prefer to keep the column index as a simple list of names, moving any nestedness to the row index. This is due to the fact that usually columns represent the measured dependent variable." ] }, { "cell_type": "code", "execution_count": 48, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
HRTemp
yearvisitsubject
20131Bob16.037.7
Guido28.038.2
Sue45.036.7
2Bob46.037.9
Guido45.036.6
Sue25.036.6
20141Bob28.035.8
Guido30.036.9
Sue39.037.0
2Bob26.037.6
Guido22.037.0
Sue64.036.9
\n", "
" ], "text/plain": [ " HR Temp\n", "year visit subject \n", "2013 1 Bob 16.0 37.7\n", " Guido 28.0 38.2\n", " Sue 45.0 36.7\n", " 2 Bob 46.0 37.9\n", " Guido 45.0 36.6\n", " Sue 25.0 36.6\n", "2014 1 Bob 28.0 35.8\n", " Guido 30.0 36.9\n", " Sue 39.0 37.0\n", " 2 Bob 26.0 37.6\n", " Guido 22.0 37.0\n", " Sue 64.0 36.9" ] }, "execution_count": 48, "metadata": {}, "output_type": "execute_result" } ], "source": [ "index = pd.MultiIndex.from_product([[2013, 2014], [1, 2], ['Bob', 'Guido', 'Sue']],\n", " names=['year', 'visit', 'subject'])\n", "columns = ['HR', 'Temp']\n", "\n", "# mock some data\n", "data = np.round(np.random.randn(12, 2), 1)\n", "data[:, ::2] *= 10\n", "data += 37\n", "\n", "# create the DataFrame\n", "health_data_row = pd.DataFrame(data, index=index, columns=columns)\n", "health_data_row" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Creating a MultiIndex from a data column" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "While all of the above methods work, and could be useful sometimes, the most common method of creating an index is from an existing data column. " ] }, { "cell_type": "code", "execution_count": 49, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
locationdaytemphumidity
0ALSUN12.331
1ALSUN14.145
2NYTUE21.341
3NYWED20.941
4NYSAT18.849
5VASAT16.552
\n", "
" ], "text/plain": [ " location day temp humidity\n", "0 AL SUN 12.3 31\n", "1 AL SUN 14.1 45\n", "2 NY TUE 21.3 41\n", "3 NY WED 20.9 41\n", "4 NY SAT 18.8 49\n", "5 VA SAT 16.5 52" ] }, "execution_count": 49, "metadata": {}, "output_type": "execute_result" } ], "source": [ "location = ['AL', 'AL', 'NY', 'NY', 'NY', 'VA']\n", "day = ['SUN', 'SUN', 'TUE', 'WED', 'SAT', 'SAT']\n", "temp = [12.3, 14.1, 21.3, 20.9, 18.8, 16.5]\n", "humidity = [31, 45, 41, 41, 49, 52]\n", "states = pd.DataFrame(dict(location=location, day=day, \n", " temp=temp, humidity=humidity))\n", "states" ] }, { "cell_type": "code", "execution_count": 50, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
locationtemphumidity
day
SUNAL12.331
SUNAL14.145
TUENY21.341
WEDNY20.941
SATNY18.849
SATVA16.552
\n", "
" ], "text/plain": [ " location temp humidity\n", "day \n", "SUN AL 12.3 31\n", "SUN AL 14.1 45\n", "TUE NY 21.3 41\n", "WED NY 20.9 41\n", "SAT NY 18.8 49\n", "SAT VA 16.5 52" ] }, "execution_count": 50, "metadata": {}, "output_type": "execute_result" } ], "source": [ "states.set_index(['day'])" ] }, { "cell_type": "code", "execution_count": 51, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
temphumidity
daylocation
SUNAL12.331
AL14.145
TUENY21.341
WEDNY20.941
SATNY18.849
VA16.552
\n", "
" ], "text/plain": [ " temp humidity\n", "day location \n", "SUN AL 12.3 31\n", " AL 14.1 45\n", "TUE NY 21.3 41\n", "WED NY 20.9 41\n", "SAT NY 18.8 49\n", " VA 16.5 52" ] }, "execution_count": 51, "metadata": {}, "output_type": "execute_result" } ], "source": [ "states.set_index(['day', 'location'])" ] }, { "cell_type": "code", "execution_count": 52, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
temphumidity
daylocation
0SUNAL12.331
1SUNAL14.145
2TUENY21.341
3WEDNY20.941
4SATNY18.849
5SATVA16.552
\n", "
" ], "text/plain": [ " temp humidity\n", " day location \n", "0 SUN AL 12.3 31\n", "1 SUN AL 14.1 45\n", "2 TUE NY 21.3 41\n", "3 WED NY 20.9 41\n", "4 SAT NY 18.8 49\n", "5 SAT VA 16.5 52" ] }, "execution_count": 52, "metadata": {}, "output_type": "execute_result" } ], "source": [ "states.set_index(['day', 'location'], append=True)" ] }, { "cell_type": "code", "execution_count": 53, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
locationtemphumidity
day
iSUNAL12.331
iiSUNAL14.145
iiiTUENY21.341
ivWEDNY20.941
vSATNY18.849
viSATVA16.552
\n", "
" ], "text/plain": [ " location temp humidity\n", " day \n", "i SUN AL 12.3 31\n", "ii SUN AL 14.1 45\n", "iii TUE NY 21.3 41\n", "iv WED NY 20.9 41\n", "v SAT NY 18.8 49\n", "vi SAT VA 16.5 52" ] }, "execution_count": 53, "metadata": {}, "output_type": "execute_result" } ], "source": [ "states.set_index([['i', 'ii', 'iii', 'iv', 'v', 'vi'], 'day'])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Indexing and Slicing a MultiIndex" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We'll use these dataframes as an example:" ] }, { "cell_type": "code", "execution_count": 54, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
subjectBobGuidoSue
typeHRTempHRTempHRTemp
yearvisit
2013151.036.314.037.921.037.0
247.037.535.037.825.036.7
2014133.037.349.036.339.037.4
233.036.532.035.929.035.7
\n", "
" ], "text/plain": [ "subject Bob Guido Sue \n", "type HR Temp HR Temp HR Temp\n", "year visit \n", "2013 1 51.0 36.3 14.0 37.9 21.0 37.0\n", " 2 47.0 37.5 35.0 37.8 25.0 36.7\n", "2014 1 33.0 37.3 49.0 36.3 39.0 37.4\n", " 2 33.0 36.5 32.0 35.9 29.0 35.7" ] }, "execution_count": 54, "metadata": {}, "output_type": "execute_result" } ], "source": [ "health_data" ] }, { "cell_type": "code", "execution_count": 55, "metadata": { "scrolled": true }, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
HRTemp
yearvisitsubject
20131Bob16.037.7
Guido28.038.2
Sue45.036.7
2Bob46.037.9
Guido45.036.6
Sue25.036.6
20141Bob28.035.8
Guido30.036.9
Sue39.037.0
2Bob26.037.6
Guido22.037.0
Sue64.036.9
\n", "
" ], "text/plain": [ " HR Temp\n", "year visit subject \n", "2013 1 Bob 16.0 37.7\n", " Guido 28.0 38.2\n", " Sue 45.0 36.7\n", " 2 Bob 46.0 37.9\n", " Guido 45.0 36.6\n", " Sue 25.0 36.6\n", "2014 1 Bob 28.0 35.8\n", " Guido 30.0 36.9\n", " Sue 39.0 37.0\n", " 2 Bob 26.0 37.6\n", " Guido 22.0 37.0\n", " Sue 64.0 36.9" ] }, "execution_count": 55, "metadata": {}, "output_type": "execute_result" } ], "source": [ "health_data_row" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If all we wish to do is to examine a column, indexing is very easy. Don't forget the dataframe as dictionary analogy:" ] }, { "cell_type": "code", "execution_count": 56, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
typeHRTemp
yearvisit
2013114.037.9
235.037.8
2014149.036.3
232.035.9
\n", "
" ], "text/plain": [ "type HR Temp\n", "year visit \n", "2013 1 14.0 37.9\n", " 2 35.0 37.8\n", "2014 1 49.0 36.3\n", " 2 32.0 35.9" ] }, "execution_count": 56, "metadata": {}, "output_type": "execute_result" } ], "source": [ "health_data['Guido'] # works for the column MultiIndex as expected" ] }, { "cell_type": "code", "execution_count": 57, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "year visit subject\n", "2013 1 Bob 16.0\n", " Guido 28.0\n", " Sue 45.0\n", " 2 Bob 46.0\n", " Guido 45.0\n", " Sue 25.0\n", "2014 1 Bob 28.0\n", " Guido 30.0\n", " Sue 39.0\n", " 2 Bob 26.0\n", " Guido 22.0\n", " Sue 64.0\n", "Name: HR, dtype: float64" ] }, "execution_count": 57, "metadata": {}, "output_type": "execute_result" } ], "source": [ "health_data_row['HR'] # that's a Series!" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Accessing single elements is also pretty straight-forward:" ] }, { "cell_type": "code", "execution_count": 58, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "HR 28.0\n", "Temp 38.2\n", "Name: (2013, 1, Guido), dtype: float64" ] }, "execution_count": 58, "metadata": {}, "output_type": "execute_result" } ], "source": [ "health_data_row.loc[2013, 1, 'Guido'] # index triplet" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can even slice easily using the first `MultiIndex` (year in our case):" ] }, { "cell_type": "code", "execution_count": 59, "metadata": { "scrolled": true }, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
HRTemp
yearvisitsubject
20131Bob16.037.7
Guido28.038.2
Sue45.036.7
2Bob46.037.9
Guido45.036.6
Sue25.036.6
20141Bob28.035.8
Guido30.036.9
Sue39.037.0
2Bob26.037.6
Guido22.037.0
Sue64.036.9
\n", "
" ], "text/plain": [ " HR Temp\n", "year visit subject \n", "2013 1 Bob 16.0 37.7\n", " Guido 28.0 38.2\n", " Sue 45.0 36.7\n", " 2 Bob 46.0 37.9\n", " Guido 45.0 36.6\n", " Sue 25.0 36.6\n", "2014 1 Bob 28.0 35.8\n", " Guido 30.0 36.9\n", " Sue 39.0 37.0\n", " 2 Bob 26.0 37.6\n", " Guido 22.0 37.0\n", " Sue 64.0 36.9" ] }, "execution_count": 59, "metadata": {}, "output_type": "execute_result" } ], "source": [ "health_data_row.loc[2013:2017] # 2017 doesn't exist, but Python's slicing rules prevent an exception here\n", "# health_data_row.loc[1] # doesn't work" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Slicing is a bit more difficult when we want to take into account all available indices. This is due to the possible conflicts between the different indices and the columns.\n", "\n", "Assuming we want to look at all the years, with all the visits, only by Bob - we would want to write something like this:" ] }, { "cell_type": "code", "execution_count": 60, "metadata": { "scrolled": true }, "outputs": [ { "ename": "SyntaxError", "evalue": "invalid syntax (, line 1)", "output_type": "error", "traceback": [ "\u001b[0;36m File \u001b[0;32m\"\"\u001b[0;36m, line \u001b[0;32m1\u001b[0m\n\u001b[0;31m health_data_row.loc[(:, :, 'Bob'), :] # doesn't work\u001b[0m\n\u001b[0m ^\u001b[0m\n\u001b[0;31mSyntaxError\u001b[0m\u001b[0;31m:\u001b[0m invalid syntax\n" ] } ], "source": [ "health_data_row.loc[(:, :, 'Bob'), :] # doesn't work" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This pickle is solved in two possible ways:\n", "\n", "First option is the [`slice`](https://www.programiz.com/python-programming/methods/built-in/slice) object:\n" ] }, { "cell_type": "code", "execution_count": 61, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "year visit subject\n", "2013 1 Bob 16.0\n", " 2 Bob 46.0\n", "2014 1 Bob 28.0\n", " 2 Bob 26.0\n", "Name: HR, dtype: float64" ] }, "execution_count": 61, "metadata": {}, "output_type": "execute_result" } ], "source": [ "bobs_data = (slice(None), slice(None), 'Bob') # all years, all visits, of Bob\n", "health_data_row.loc[bobs_data, 'HR']\n", "# arr[slice(None), 1] is the same as arr[:, 1]" ] }, { "cell_type": "code", "execution_count": 62, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "year visit subject\n", "2013 1 Bob 16.0\n", " Guido 28.0\n", " 2 Bob 46.0\n", " Guido 45.0\n", "2014 1 Bob 28.0\n", " Guido 30.0\n", " 2 Bob 26.0\n", " Guido 22.0\n", "Name: HR, dtype: float64" ] }, "execution_count": 62, "metadata": {}, "output_type": "execute_result" } ], "source": [ "row_idx = (slice(None), slice(None), slice('Bob', 'Guido')) # all years, all visits, Bob + Guido\n", "health_data_row.loc[row_idx, 'HR']" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Another option is the [`IndexSlice`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.IndexSlice.html) object:" ] }, { "cell_type": "code", "execution_count": 63, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
HRTemp
yearvisitsubject
20131Bob16.037.7
2Bob46.037.9
20141Bob28.035.8
2Bob26.037.6
\n", "
" ], "text/plain": [ " HR Temp\n", "year visit subject \n", "2013 1 Bob 16.0 37.7\n", " 2 Bob 46.0 37.9\n", "2014 1 Bob 28.0 35.8\n", " 2 Bob 26.0 37.6" ] }, "execution_count": 63, "metadata": {}, "output_type": "execute_result" } ], "source": [ "\n", "idx = pd.IndexSlice\n", "health_data_row.loc[idx[:, :, 'Bob'], :] # very close to the naive implementation" ] }, { "cell_type": "code", "execution_count": 64, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "year visit subject\n", "2013 1 Bob 37.7\n", " Guido 38.2\n", "2014 1 Bob 35.8\n", " Guido 36.9\n", "Name: Temp, dtype: float64" ] }, "execution_count": 64, "metadata": {}, "output_type": "execute_result" } ], "source": [ "idx2 = pd.IndexSlice\n", "health_data_row.loc[idx2[2013:2015, 1, 'Bob':'Guido'], 'Temp']" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Finally, there's one more way to index into a `MultiIndex` which is very straight-forward and explicit; the [cross-section](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.xs.html)." ] }, { "cell_type": "code", "execution_count": 65, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
HRTemp
subject
Bob16.037.7
Guido28.038.2
Sue45.036.7
\n", "
" ], "text/plain": [ " HR Temp\n", "subject \n", "Bob 16.0 37.7\n", "Guido 28.0 38.2\n", "Sue 45.0 36.7" ] }, "execution_count": 65, "metadata": {}, "output_type": "execute_result" } ], "source": [ "health_data_row.xs(key=(2013, 1), level=('year', 'visit'))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Small caveat: unsorted indices" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Having an unsorted index in your `MultiIndex` might make the interpreter pop a few exceptions at you:" ] }, { "cell_type": "code", "execution_count": 66, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "char int\n", "a 1 0.803176\n", " 2 0.634355\n", "c 1 0.888520\n", " 2 0.023923\n", "b 1 0.727000\n", " 2 0.495458\n", "dtype: float64" ] }, "execution_count": 66, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# char index in unsorted\n", "index = pd.MultiIndex.from_product([['a', 'c', 'b'], [1, 2]])\n", "data = pd.Series(np.random.rand(6), index=index)\n", "data.index.names = ['char', 'int']\n", "data" ] }, { "cell_type": "code", "execution_count": 67, "metadata": {}, "outputs": [ { "ename": "UnsortedIndexError", "evalue": "'Key length (1) was greater than MultiIndex lexsort depth (0)'", "output_type": "error", "traceback": [ "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m", "\u001b[0;31mUnsortedIndexError\u001b[0m Traceback (most recent call last)", "\u001b[0;32m\u001b[0m in \u001b[0;36m\u001b[0;34m\u001b[0m\n\u001b[0;32m----> 1\u001b[0;31m \u001b[0mdata\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0;34m'a'\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m'b'\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m", "\u001b[0;32m~/.local/lib/python3.8/site-packages/pandas/core/series.py\u001b[0m in \u001b[0;36m__getitem__\u001b[0;34m(self, key)\u001b[0m\n\u001b[1;32m 1111\u001b[0m \u001b[0mkey\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mcheck_bool_indexer\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mindex\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mkey\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 1112\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m-> 1113\u001b[0;31m \u001b[0;32mreturn\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_get_with\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mkey\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 1114\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 1115\u001b[0m \u001b[0;32mdef\u001b[0m \u001b[0m_get_with\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mkey\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n", "\u001b[0;32m~/.local/lib/python3.8/site-packages/pandas/core/series.py\u001b[0m in \u001b[0;36m_get_with\u001b[0;34m(self, key)\u001b[0m\n\u001b[1;32m 1116\u001b[0m \u001b[0;31m# other: fancy integer or otherwise\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 1117\u001b[0m \u001b[0;32mif\u001b[0m \u001b[0misinstance\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mkey\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mslice\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m-> 1118\u001b[0;31m \u001b[0mindexer\u001b[0m \u001b[0;34m=\u001b[0m 
\u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mindex\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_convert_slice_indexer\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mkey\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mkind\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;34m\"getitem\"\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 1119\u001b[0m \u001b[0;32mreturn\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_get_values\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mindexer\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 1120\u001b[0m \u001b[0;32melif\u001b[0m \u001b[0misinstance\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mkey\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mABCDataFrame\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n", "\u001b[0;32m~/.local/lib/python3.8/site-packages/pandas/core/indexes/base.py\u001b[0m in \u001b[0;36m_convert_slice_indexer\u001b[0;34m(self, key, kind)\u001b[0m\n\u001b[1;32m 3214\u001b[0m \u001b[0;32melse\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 3215\u001b[0m \u001b[0;32mtry\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m-> 3216\u001b[0;31m \u001b[0mindexer\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mslice_indexer\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mstart\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mstop\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mstep\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mkind\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mkind\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 3217\u001b[0m \u001b[0;32mexcept\u001b[0m \u001b[0mException\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 3218\u001b[0m \u001b[0;32mif\u001b[0m \u001b[0mis_index_slice\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n", "\u001b[0;32m~/.local/lib/python3.8/site-packages/pandas/core/indexes/base.py\u001b[0m in \u001b[0;36mslice_indexer\u001b[0;34m(self, start, end, step, kind)\u001b[0m\n\u001b[1;32m 5032\u001b[0m \u001b[0mslice\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;36m1\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m3\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 5033\u001b[0m \"\"\"\n\u001b[0;32m-> 5034\u001b[0;31m \u001b[0mstart_slice\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mend_slice\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mslice_locs\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mstart\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mend\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mstep\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mstep\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mkind\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mkind\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 5035\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 5036\u001b[0m \u001b[0;31m# return a slice\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n", "\u001b[0;32m~/.local/lib/python3.8/site-packages/pandas/core/indexes/multi.py\u001b[0m in \u001b[0;36mslice_locs\u001b[0;34m(self, start, end, step, kind)\u001b[0m\n\u001b[1;32m 2579\u001b[0m \u001b[0;31m# This function adds nothing to its parent implementation (the 
magic\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 2580\u001b[0m \u001b[0;31m# happens in get_slice_bound method), but it adds meaningful doc.\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m-> 2581\u001b[0;31m \u001b[0;32mreturn\u001b[0m \u001b[0msuper\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mslice_locs\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mstart\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mend\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mstep\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mkind\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mkind\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 2582\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 2583\u001b[0m \u001b[0;32mdef\u001b[0m \u001b[0m_partial_tup_index\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mtup\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mside\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;34m\"left\"\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n", "\u001b[0;32m~/.local/lib/python3.8/site-packages/pandas/core/indexes/base.py\u001b[0m in \u001b[0;36mslice_locs\u001b[0;34m(self, start, end, step, kind)\u001b[0m\n\u001b[1;32m 5246\u001b[0m \u001b[0mstart_slice\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0;32mNone\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 5247\u001b[0m \u001b[0;32mif\u001b[0m \u001b[0mstart\u001b[0m \u001b[0;32mis\u001b[0m \u001b[0;32mnot\u001b[0m \u001b[0;32mNone\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m-> 5248\u001b[0;31m \u001b[0mstart_slice\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mget_slice_bound\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mstart\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m\"left\"\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mkind\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 5249\u001b[0m \u001b[0;32mif\u001b[0m \u001b[0mstart_slice\u001b[0m \u001b[0;32mis\u001b[0m \u001b[0;32mNone\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 5250\u001b[0m \u001b[0mstart_slice\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0;36m0\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n", "\u001b[0;32m~/.local/lib/python3.8/site-packages/pandas/core/indexes/multi.py\u001b[0m in \u001b[0;36mget_slice_bound\u001b[0;34m(self, label, side, kind)\u001b[0m\n\u001b[1;32m 2523\u001b[0m \u001b[0;32mif\u001b[0m \u001b[0;32mnot\u001b[0m \u001b[0misinstance\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mlabel\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mtuple\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 2524\u001b[0m \u001b[0mlabel\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0;34m(\u001b[0m\u001b[0mlabel\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m-> 2525\u001b[0;31m \u001b[0;32mreturn\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_partial_tup_index\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mlabel\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mside\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mside\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 2526\u001b[0m 
\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 2527\u001b[0m \u001b[0;32mdef\u001b[0m \u001b[0mslice_locs\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mstart\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;32mNone\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mend\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;32mNone\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mstep\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;32mNone\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mkind\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;32mNone\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n", "\u001b[0;32m~/.local/lib/python3.8/site-packages/pandas/core/indexes/multi.py\u001b[0m in \u001b[0;36m_partial_tup_index\u001b[0;34m(self, tup, side)\u001b[0m\n\u001b[1;32m 2583\u001b[0m \u001b[0;32mdef\u001b[0m \u001b[0m_partial_tup_index\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mtup\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mside\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;34m\"left\"\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 2584\u001b[0m \u001b[0;32mif\u001b[0m \u001b[0mlen\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mtup\u001b[0m\u001b[0;34m)\u001b[0m \u001b[0;34m>\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mlexsort_depth\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m-> 2585\u001b[0;31m raise UnsortedIndexError(\n\u001b[0m\u001b[1;32m 2586\u001b[0m \u001b[0;34m\"Key length (%d) was greater than MultiIndex\"\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 2587\u001b[0m \u001b[0;34m\" lexsort depth (%d)\"\u001b[0m \u001b[0;34m%\u001b[0m \u001b[0;34m(\u001b[0m\u001b[0mlen\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mtup\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mlexsort_depth\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n", "\u001b[0;31mUnsortedIndexError\u001b[0m: 'Key length (1) was greater than MultiIndex lexsort depth (0)'" ] } ], "source": [ "data['a':'b']" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`lexsort` means \"lexicography-sorted\", or sorted by either number or letter. Sorting an index is done with the [`sort_index()`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.sort_index.html) method:" ] }, { "cell_type": "code", "execution_count": 68, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "char int\n", "a 1 0.803176\n", " 2 0.634355\n", "b 1 0.727000\n", " 2 0.495458\n", "c 1 0.888520\n", " 2 0.023923\n", "dtype: float64\n", "char int\n", "a 1 0.803176\n", " 2 0.634355\n", "b 1 0.727000\n", " 2 0.495458\n", "dtype: float64\n" ] } ], "source": [ "data.sort_index(inplace=True)\n", "print(data)\n", "print(data['a':'b']) # now it works" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Data Aggregation" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Data aggregation using a `MultiIndex` is super simple:" ] }, { "cell_type": "code", "execution_count": 69, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
locationdaytemphumidity
0ALSUN12.331
1ALSUN14.145
2NYTUE21.341
3NYWED20.941
4NYSAT18.849
5VASAT16.552
\n", "
" ], "text/plain": [ " location day temp humidity\n", "0 AL SUN 12.3 31\n", "1 AL SUN 14.1 45\n", "2 NY TUE 21.3 41\n", "3 NY WED 20.9 41\n", "4 NY SAT 18.8 49\n", "5 VA SAT 16.5 52" ] }, "execution_count": 69, "metadata": {}, "output_type": "execute_result" } ], "source": [ "states" ] }, { "cell_type": "code", "execution_count": 70, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
temphumidity
locationday
ALSUN12.331
SUN14.145
NYTUE21.341
WED20.941
SAT18.849
VASAT16.552
\n", "
" ], "text/plain": [ " temp humidity\n", "location day \n", "AL SUN 12.3 31\n", " SUN 14.1 45\n", "NY TUE 21.3 41\n", " WED 20.9 41\n", " SAT 18.8 49\n", "VA SAT 16.5 52" ] }, "execution_count": 70, "metadata": {}, "output_type": "execute_result" } ], "source": [ "states.set_index(['location', 'day'], inplace=True)\n", "states" ] }, { "cell_type": "code", "execution_count": 71, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
temphumidity
location
AL13.20000038.000000
NY20.33333343.666667
VA16.50000052.000000
\n", "
" ], "text/plain": [ " temp humidity\n", "location \n", "AL 13.200000 38.000000\n", "NY 20.333333 43.666667\n", "VA 16.500000 52.000000" ] }, "execution_count": 71, "metadata": {}, "output_type": "execute_result" } ], "source": [ "states.mean(level='location')" ] }, { "cell_type": "code", "execution_count": 72, "metadata": { "scrolled": true }, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
temphumidity
day
SUN13.2038.0
TUE21.3041.0
WED20.9041.0
SAT17.6550.5
\n", "
" ], "text/plain": [ " temp humidity\n", "day \n", "SUN 13.20 38.0\n", "TUE 21.30 41.0\n", "WED 20.90 41.0\n", "SAT 17.65 50.5" ] }, "execution_count": 72, "metadata": {}, "output_type": "execute_result" } ], "source": [ "states.median(level='day')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`````{admonition} Exercise: Replacing Values\n", "````{hint}\n", "When we wish to replace values in a Series or a DataFrame, two main options come to mind:\n", "\n", "1. A boolean mask (e.g. `df[mask] = \"new value\"`).\n", "2. The [`replace()`](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.replace.html) method.\n", "\n", "In the following exercise try and explore the second method, which provides powerful custom replacement options.\n", "\n", "````\n", "* Create a (10, 2) dataframe with increasing integer values 0-9 in both columns.\n", "````{dropdown} Solution\n", "```python\n", "data = np.tile(np.arange(10), (2, 1)).T\n", "df = pd.DataFrame(data)\n", "```\n", "````\n", "* Use the `.replace()` method to replace the value 3 in the first column with 99.\n", "````{dropdown} Solution\n", "```python\n", "df.replace({0: 3}, {0: 99})\n", "```\n", "````\n", "* Use it to replace 3 in column 0, and 1 in column 2, with 99.\n", "````{dropdown} Solution\n", "```python\n", "df.replace({0: 3, 1: 1}, 99)\n", "```\n", "````\n", "* Use its `method` keyword to replace values in the range [3, 6) of the first column with 6.\n", "````{dropdown} Solution\n", "```python\n", "df[0].replace(np.arange(3, 6), method='bfill')\n", "```\n", "````\n", "`````" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`````{admonition} MultiIndex Construction and Indexing\n", "* Construct a `MultiIndex` with three levels composed from the product of the following lists:\n", " - `['a', b', 'c', 'd']`\n", " - `['i', 'ii', 'iii']`\n", " - `['x', 'y', 'z']`\n", "````{dropdown} Solution\n", "```python\n", "letters = ['a', 'b', 'c', 'd']\n", "roman = ['i', 'ii', 'iii']\n", "coordinates = ['x', 'y', 'z']\n", "index = pd.MultiIndex.from_product((letters, roman, coordinates))\n", "```\n", "````\n", "* Instantiate a dataframe with the created index and populate it with random values in two columns.\n", "````{dropdown} Solution\n", "```python\n", "size = len(letters) * len(roman) * len(coordinates)\n", "data = np.random.randint(20, size=(size, 2))\n", "df = pd.DataFrame(data, columns=['today', 'tomorrow'], index=index)\n", "```\n", "````\n", "* Use two different methods to extract only the values with an index of `('a', 'ii', 'z')`.\n", "````{dropdown} Solution\n", "Option \\#1:\n", "```python\n", "df.loc['a', 'ii', 'z']\n", "```\n", "Option \\#2:\n", "```python\n", "df.xs(key=('a', 'ii', 'z'))\n", "```\n", "Option \\#3:\n", "```python\n", "idx = pd.IndexSlice\n", "df.loc[idx['a', 'ii', 'z'], :]\n", "```\n", "````\n", "* Slice in two ways the values with an index of `'x'`.\n", "````{dropdown} Solution\n", "Option \\#1:\n", "```python\n", "idx = pd.IndexSlice\n", "df.loc[idx[:, :, 'x'], :]\n", "```\n", "Option \\#2:\n", "```python\n", "df.xs(key='x', level=2)\n", "```\n", "Option \\#3:\n", "```python\n", "df.loc[(slice(None), slice(None), 'x'), :]\n", "```\n", "````\n", "`````" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## _n_-Dimensional Containers" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "While technically a dataframe is a two-dimensional container, in the next lesson we'll see why it can perform quite efficiently as a pseudo n-dimensional container. 
\n", "\n", "If you wish to have _true_ n-dimensional DataFrame-like data structures, you should use the `xarray` package and its `xr.DataArray` and `xr.Dataset` objects, which we'll discuss in the next lessons." ] } ], "metadata": { "anaconda-cloud": {}, "celltoolbar": "Tags", "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.5" } }, "nbformat": 4, "nbformat_minor": 2 }