Manipulating dataframes in python
From raju
Creating a dataframe
Create a dataframe from list of lists
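A minimal sketch (column names are illustrative): each inner list becomes one row.

import pandas as pd
# 'c1' and 'c2' are made-up column names
data = [[10, 100], [11, 110], [12, 120]]
df = pd.DataFrame(data, columns=['c1', 'c2'])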
Create a data frame from two lists
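A minimal sketch, assuming the two lists are of equal length and should become columns ('name' and 'score' are illustrative names):

import pandas as pd
names = ['a', 'b', 'c']
scores = [1, 2, 3]
df = pd.DataFrame({'name': names, 'score': scores})
# or build it row-wise from pairs
df2 = pd.DataFrame(list(zip(names, scores)), columns=['name', 'score'])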
Create a dataframe from a list of dictionaries
Example 1
In [1]: import pandas as pd
   ...: df = pd.DataFrame([{'c1':10, 'c2':100}, {'c1':11,'c2':110}, {'c1':12,'c2':120}])
   ...: df
Out[1]:
   c1   c2
0  10  100
1  11  110
2  12  120
Ref:- https://stackoverflow.com/questions/16476924/how-to-iterate-over-rows-in-a-dataframe-in-pandas/
Example 2
>>> import pandas as pd
>>> d = [{'points': 50, 'time': '5:00', 'year': 2010},
...      {'points': 25, 'time': '6:00', 'month': "february"},
...      {'points': 90, 'time': '9:00', 'month': 'january'},
...      {'points_h1': 20, 'month': 'june'}]
>>> df = pd.DataFrame(d)
>>> print(df)
      month  points  points_h1  time    year
0       NaN    50.0        NaN  5:00  2010.0
1  february    25.0        NaN  6:00     NaN
2   january    90.0        NaN  9:00     NaN
3      june     NaN       20.0   NaN     NaN
Ref: http://stackoverflow.com/questions/20638006/convert-list-of-dictionaries-to-dataframe
Create a dataframe from a list of tuples
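A minimal sketch (data and column names are made up): a list of tuples behaves like a list of lists, one tuple per row.

import pandas as pd
rows = [(1, 'x'), (2, 'y'), (3, 'z')]
df = pd.DataFrame(rows, columns=['num', 'label'])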
Create a dataframe from a dictionary
Convert the keys into index names, values to column and specify a column name
In [4]: import pandas as pd
   ...: d = {'COST': 0.4, 'BRK.B': 0.4, 'CASH': 0.2}
   ...: df = pd.DataFrame.from_dict(d, orient='index', columns=['weight'])

In [5]: df
Out[5]:
       weight
COST      0.4
BRK.B     0.4
CASH      0.2
Tested with pandas 1.5.3.
Convert each item of the dictionary into a dataframe row and specify the column names while creating the dataframe.
In [1]: import pandas as pd
   ...: d = {
   ...:     '2021-07-02': 392,
   ...:     '2021-07-06': 391,
   ...:     '2021-06-29': 400,
   ...:     '2021-06-28': 395
   ...: }
   ...: df = pd.DataFrame(d.items(), columns=['date', 'value'])
   ...: df
Out[1]:
         date  value
0  2021-07-02    392
1  2021-07-06    391
2  2021-06-29    400
3  2021-06-28    395
Make keys of the dataframe into column names.
>>> import pandas as pd
>>> d1 = {'key':1, 'foo':2, 'bar':3}
>>> d1
{'key': 1, 'foo': 2, 'bar': 3}
>>> pd.DataFrame([d1])
   bar  foo  key
0    3    2    1
Create a dataframe from dictionary of dictionaries
Say we have a dictionary of dictionaries of the form:
{'user':{movie:rating} }
For example,
{'Jill': {'Avenger: Age of Ultron': 7.0, 'Django Unchained': 6.5, 'Gone Girl': 9.0, 'Kill the Messenger': 8.0},
 'Toby': {'Avenger: Age of Ultron': 8.5, 'Django Unchained': 9.0, 'Zoolander': 2.0}}
To convert it into dataframe
>>> d1 = {'Jill': {'Django Unchained': 6.5, 'Gone Girl': 9.0, 'Kill the Messenger': 8.0, 'Avenger: Age of Ultron': 7.0},
...       'Toby': {'Django Unchained': 9.0, 'Zoolander': 2.0, 'Avenger: Age of Ultron': 8.5}}
>>> import pandas as pd
>>> pd.DataFrame(d1)
                        Jill  Toby
Avenger: Age of Ultron   7.0   8.5
Django Unchained         6.5   9.0
Gone Girl                9.0   NaN
Kill the Messenger       8.0   NaN
Zoolander                NaN   2.0
>>> pd.DataFrame.from_dict(d1)
                        Jill  Toby
Avenger: Age of Ultron   7.0   8.5
Django Unchained         6.5   9.0
Gone Girl                9.0   NaN
Kill the Messenger       8.0   NaN
Zoolander                NaN   2.0
>>> pd.DataFrame.from_dict(d1, orient='index')
      Django Unchained  Gone Girl  Kill the Messenger  Avenger: Age of Ultron  Zoolander
Jill               6.5        9.0                 8.0                     7.0        NaN
Toby               9.0        NaN                 NaN                     8.5        2.0
Create a dataframe from a series of lists
Create a dataframe from two series
To import the series as rows
>>> import pandas as pd
>>> import numpy as np
>>> s1 = pd.Series([1, 2, 3, 5, 8, 9, 0, np.nan, 7, np.nan]); s2 = pd.Series([0, 1, 2, 3, 4, 5, 6, 8, np.nan, np.nan])
>>> pd.DataFrame([s1, s2])
     0    1    2    3    4    5    6    7    8   9
0  1.0  2.0  3.0  5.0  8.0  9.0  0.0  NaN  7.0 NaN
1  0.0  1.0  2.0  3.0  4.0  5.0  6.0  8.0  NaN NaN
To import them as columns
>>> pd.DataFrame({'s1': s1, 's2': s2})
    s1   s2
0  1.0  0.0
1  2.0  1.0
2  3.0  2.0
3  5.0  3.0
4  8.0  4.0
5  9.0  5.0
6  0.0  6.0
7  NaN  8.0
8  7.0  NaN
9  NaN  NaN
Create a dataframe from a bunch of variables
Create a dataframe with row index
$ python
Python 3.6.1 |Anaconda 4.4.0 (64-bit)| (default, May 11 2017, 13:25:24) [MSC v.1900 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import pandas as pd
>>> pd.DataFrame([{'foo': 1.1, 'bar': 2.2}])
   bar  foo
0  2.2  1.1
>>> pd.DataFrame([{'foo': 1.1, 'bar': 2.2}], index=['baz'])
     bar  foo
baz  2.2  1.1
Create an empty data frame
>>> import pandas as pd
>>> a = pd.DataFrame(None)
>>> a
Empty DataFrame
Columns: []
Index: []
>>> type(a)
<class 'pandas.core.frame.DataFrame'>
To check if the dataframe is empty
>>> a.empty
True
Create an empty dataframe with column names
>>> import pandas as pd
>>> a = pd.DataFrame(columns=['x', 'y', 'z'])
>>> a
Empty DataFrame
Columns: [x, y, z]
Index: []
>>> type(a)
<class 'pandas.core.frame.DataFrame'>
Check that it is an empty dataframe
>>> a.empty
True
Create a new dataframe with fewer columns
To select the foo and bar columns from all_df dataframe and create a new dataframe called df
df = all_df[['foo', 'bar']].copy()
Create a dataframe using a matrix of random numbers
In [1]: import pandas as pd
   ...: import numpy as np
   ...: df = pd.DataFrame(np.random.randn(5,3))
   ...: df
Out[1]:
          0         1         2
0 -1.423967  0.798794 -1.144020
1 -1.667792  0.772769  1.161315
2 -1.430745 -0.573701  0.074876
3 -0.673812  0.534825 -0.934246
4 -1.773546  1.293830  1.113970
Create a dataframe using numpy random numbers
In [1]: df = pd.DataFrame({'A' : ['foo', 'bar', 'foo', 'bar', 'foo', 'bar', 'foo', 'foo'],
   ...:                    'B' : ['one', 'one', 'two', 'three', 'two', 'two', 'one', 'three'],
   ...:                    'C' : np.random.randn(8),
   ...:                    'D' : np.random.randn(8)}); df
Out[1]:
     A      B         C         D
0  foo    one  0.262732  0.089163
1  bar    one -1.591000  0.646790
2  foo    two -0.912634 -0.737303
3  bar  three  0.417209  0.311601
4  foo    two  0.034521  0.679122
5  bar    two  0.328215  1.504696
6  foo    one  0.256409 -0.366747
7  foo  three -1.647533  0.509802
Create a dataframe using numpy array
$ ipython
Python 3.6.8 |Anaconda, Inc.| (default, Dec 30 2018, 18:50:55) [MSC v.1915 64 bit (AMD64)]

In [1]: import numpy as np

In [2]: import pandas as pd

In [3]: a = np.array([[1,2,3], [4,5,6]]); a
Out[3]:
array([[1, 2, 3],
       [4, 5, 6]])

In [4]: b = pd.DataFrame(a, columns=list('pqr')); b
Out[4]:
   p  q  r
0  1  2  3
1  4  5  6
Create a dataframe from multiple numpy 1d arrays
$ ipython Python 3.11.3 | packaged by Anaconda, Inc. | (main, Apr 19 2023, 23:46:34) [MSC v.1916 64 bit (AMD64)] IPython 8.12.0 -- An enhanced Interactive Python. Type '?' for help. In [1]: import numpy as np import pandas as pd np.random.seed(0) a1 = np.random.randn(6) a2 = np.random.randn(6) df = pd.DataFrame({'a1': a1, 'a2':a2}) In [2]: [type(a1), type(a2), type(df)] Out[2]: [numpy.ndarray, numpy.ndarray, pandas.core.frame.DataFrame] In [3]: [a1.shape, a2.shape, df.shape] Out[3]: [(6,), (6,), (6, 2)] In [4]: print(a1) [ 1.76405235 0.40015721 0.97873798 2.2408932 1.86755799 -0.97727788] In [5]: print(a2) [ 0.95008842 -0.15135721 -0.10321885 0.4105985 0.14404357 1.45427351] In [6]: print(df) a1 a2 0 1.764052 0.950088 1 0.400157 -0.151357 2 0.978738 -0.103219 3 2.240893 0.410599 4 1.867558 0.144044 5 -0.977278 1.454274
read dataframe from stdin
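A minimal sketch, assuming the data arriving on stdin is CSV: read_csv accepts any file-like object, so sys.stdin can be passed directly.

import sys
import pandas as pd
# e.g.  cat foo.csv | python myscript.py
df = pd.read_csv(sys.stdin)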
read multiple csv files into a dataframe
import pandas as pd
all_files = ("file_1.txt", "file_2.txt")
dfg = (pd.read_csv(f, sep=',', low_memory=False) for f in all_files)
df = pd.concat(dfg, ignore_index=True)
another way
import os
import pandas as pd
all_files = ("file_1.txt", "file_2.txt")
frames = []
for f in all_files:
    if not os.path.isfile(f):
        print("Warning: File", f, "does not exist. Skipping it.")
        continue
    cur_frame = pd.read_csv(f, low_memory=False)
    frames.append(cur_frame)
df = pd.concat(frames)
snippet 1:
csv_files = [os.path.join(dir, file) for file in os.listdir(dir) if file.endswith('.csv')]
snippet 2:
all_csv_files = [os.path.join(root, file) for root, dirs, files in os.walk(dir) for file in files if file.endswith('.csv')]
snippet 3:
big_df = pd.concat((pd.read_csv(f, sep=';') for f in glob.glob(path + "/*.csv")), ignore_index=True)
Fill a dataframe row by row
In [1]: import pandas as pd
   ...: df = pd.DataFrame(columns=['c1', 'c2'])
   ...: for i in range(3):
   ...:     df = df.append({'c1': i+5, 'c2': i+10}, ignore_index=True)
   ...: df
Out[1]:
  c1  c2
0  5  10
1  6  11
2  7  12
Ref:- https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.append.html
Note:- pandas (as of 0.24.2) does not support in-place appends.
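DataFrame.append was deprecated in pandas 1.4 and removed in pandas 2.0. A sketch of the same row-by-row fill using pd.concat instead (collecting the rows in a list and constructing the frame once is usually faster still):

import pandas as pd
df = pd.DataFrame(columns=['c1', 'c2'])
for i in range(3):
    new_row = pd.DataFrame([{'c1': i+5, 'c2': i+10}])
    df = pd.concat([df, new_row], ignore_index=True)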
Fill a dataframe using row index
Consider the dataframe
$ python
Python 3.6.1 |Anaconda 4.4.0 (64-bit)| (default, May 11 2017, 13:25:24) [MSC v.1900 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import pandas as pd
>>> df = pd.DataFrame(columns=['a','b','c','d'], index=['x','y','z'])
>>> df
     a    b    c    d
x  NaN  NaN  NaN  NaN
y  NaN  NaN  NaN  NaN
z  NaN  NaN  NaN  NaN
To change the elements of an existing row
>>> df.loc['z'] = {'a':1, 'b':5, 'c':2, 'd':3}
>>> df
     a    b    c    d
x  NaN  NaN  NaN  NaN
y  NaN  NaN  NaN  NaN
z    1    5    2    3
To add a new row with an index
>>> df.loc['p'] = {'a':3, 'b':1, 'c':4, 'd':2}
>>> df
     a    b    c    d
x  NaN  NaN  NaN  NaN
y  NaN  NaN  NaN  NaN
z    1    5    2    3
p    3    1    4    2
The above method only works if you are assigning values to all the columns. For example
>>> df.loc['q'] = {'a':2, 'b':3}
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\ProgramData\Continuum\Anaconda\envs\py36\lib\site-packages\pandas\core\indexing.py", line 179, in __setitem__
    self._setitem_with_indexer(indexer, value)
  File "C:\ProgramData\Continuum\Anaconda\envs\py36\lib\site-packages\pandas\core\indexing.py", line 419, in _setitem_with_indexer
    raise ValueError("cannot set a row with "
ValueError: cannot set a row with mismatched columns
To do this, use instead
>>> df.loc['q', 'a'] = 2
>>> df
     a    b    c    d
x  NaN  NaN  NaN  NaN
y  NaN  NaN  NaN  NaN
z    1    5    2    3
p    3    1    4    2
q    2  NaN  NaN  NaN
>>> df.loc['q', 'b'] = 3
>>> df
     a    b    c    d
x  NaN  NaN  NaN  NaN
y  NaN  NaN  NaN  NaN
z    1    5    2    3
p    3    1    4    2
q    2    3  NaN  NaN
Create a dataframe with a specified number of rows
In [1]: import pandas as pd
   ...: df = pd.DataFrame(index=range(5), columns=list('AB'))
   ...: df
Out[1]:
     A    B
0  NaN  NaN
1  NaN  NaN
2  NaN  NaN
3  NaN  NaN
4  NaN  NaN
create a dataframe by splitting strings
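A minimal sketch of one common case (delimiter and column names are made up): split a column of delimited strings into separate columns with str.split(expand=True).

import pandas as pd
s = pd.Series(['a_1', 'b_2', 'c_3'])
df = s.str.split('_', expand=True)
df.columns = ['letter', 'number']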
Creating a series
Create series with all NaN values
series2 = pd.Series(np.nan * np.ones(shape=series1.shape))
demonstrates | how to create an array of nan values using numpy functions
tags | series of size N, create a series of length N with NaN values
select some rows and columns
tags | loc on a boolean column
select rows and columns based on position
tags | using iloc, print first N values after sorting, select values based on row position and column label
select row with first True value and last None
df[df['F'] == True].iloc[0:1]
df[df['F'].isnull()].iloc[-1:]
tags | using iloc
Select rows with multiple constraints
df[(df['foo'] >= FOO) & (df['bar'] <= BAR)]
where foo, bar are columns in dataframe df, FOO, BAR are some thresholds.
See also:
- http://chrisalbon.com/python/pandas_index_select_and_filter.html
- https://github.com/chrisalbon/Data-Science-For-Political-And-Social-Phenomena/issues/30
Select rows where values in a column are None
df[df['foo'].isnull()]
To do the opposite
df[~df['foo'].isnull()]
tags | to select rows where column values are not None
Select columns based on dtype
Use DataFrame.select_dtypes() to decompose data based on its type. For example
def decompose_data(df):
    # use string dtype names; the aliases np.float, np.int and np.object
    # were removed in NumPy 1.24
    d = {}
    d['float'] = df.select_dtypes(include=['float'])
    d['int'] = df.select_dtypes(include=['int'])
    d['object'] = df.select_dtypes(include=['object'])
    return d
Ref:-
- https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.select_dtypes.html
- hierarchy of dtypes - https://docs.scipy.org/doc/numpy/reference/arrays.scalars.html
- To do it from scratch - https://stackoverflow.com/a/21720133/6305733
tags | get all integer columns in a dataframe
select only some columns
- To select columns
df2 = df1[['col1', 'col2']]
Ref:- http://pandas.pydata.org/pandas-docs/stable/indexing.html
select all except some columns
use df.columns.isin()
To select everything except 'col1'
df.loc[:,~df.columns.isin(['col1'])]
To drop multiple columns
df.loc[:, ~df.columns.isin(['col1', 'col2'])]
See also:
- sample notebook (nbviewer.jupyter.org/github/KamarajuKusumanchi)
- https://stackoverflow.com/questions/29763620/how-to-select-all-columns-except-one-column-in-pandas
select element by row and column label
Use
df.loc[row_label, col_label]
to select an element by row and column labels. For example,
>>> import pandas as pd
>>> df = pd.DataFrame({'a': [1,2,3], 'b': [4,5,6], 'c': [1,2,3]})
>>> df
   a  b  c
0  1  4  1
1  2  5  2
2  3  6  3
>>> df.loc[1,'b']
5
The row labels can also be strings instead of integers.
>>> df = pd.DataFrame({'a': [1,2,3], 'b': [4,5,6], 'c': [1,2,3]}, index=['x', 'y', 'z'])
>>> df
   a  b  c
x  1  4  1
y  2  5  2
z  3  6  3
>>> df.loc['y','b']
5
split columns
based on another column
find values based on another column
- Use '==' or .isin([ ]) or .str.contains(regex) to find the rows
- Filter them using .loc[mask, cols].
- If cols is a string, this will give a Series
- if cols is a list, this will give a DataFrame
- Use
- .values to get it as a numpy array
- .iloc[0] if only the first element is needed
Example 1: ('=='; cols is a string; .values and .iloc[0])
$ ipython Python 3.8.3 (default, May 19 2020, 06:50:17) [MSC v.1916 64 bit (AMD64)] Type 'copyright', 'credits' or 'license' for more information IPython 7.16.1 -- An enhanced Interactive Python. Type '?' for help. In [1]: import pandas as pd ...: df = pd.DataFrame({'A': [1, 2, 3, 4], 'B': ['p1', 'p2', 'p3', 'p4']}) ...: df Out[1]: A B 0 1 p1 1 2 p2 2 3 p3 3 4 p4 In [2]: df.loc[df['A'] == 3, 'B'] Out[2]: 2 p3 Name: B, dtype: object In [3]: type(df.loc[df['A'] == 3, 'B']) Out[3]: pandas.core.series.Series In [4]: df.loc[df['A'] == 3, 'B'].values Out[4]: array(['p3'], dtype=object) In [5]: type(df.loc[df['A'] == 3, 'B'].values) Out[5]: numpy.ndarray In [6]: df.loc[df['A'] == 3, 'B'].iloc[0] Out[6]: 'p3' In [7]: type(df.loc[df['A'] == 3, 'B'].iloc[0]) Out[7]: str
Example 2: (.isin([]); cols is a string; .values and .iloc[0])
# Using the same data frame from the previous example. In [8]: df.loc[df['A'].isin([2, 4]), 'B'] Out[8]: 1 p2 3 p4 Name: B, dtype: object In [9]: type(df.loc[df['A'].isin([2, 4]), 'B']) Out[9]: pandas.core.series.Series In [10]: df.loc[df['A'].isin([2, 4]), 'B'].values Out[10]: array(['p2', 'p4'], dtype=object) In [11]: type(df.loc[df['A'].isin([2, 4]), 'B'].values) Out[11]: numpy.ndarray In [12]: df.loc[df['A'].isin([2, 4]), 'B'].iloc[0] Out[12]: 'p2' In [13]: type(df.loc[df['A'].isin([2, 4]), 'B'].iloc[0]) Out[13]: str
Example 3: (.str.contains(' '); cols is a string; .values)
$ ipython Python 3.8.3 (default, May 19 2020, 06:50:17) [MSC v.1916 64 bit (AMD64)] Type 'copyright', 'credits' or 'license' for more information IPython 7.13.0 -- An enhanced Interactive Python. Type '?' for help. In [1]: import pandas as pd ...: df = pd.DataFrame({'A': [1, 2, 3, 4, 5], 'B': ['hi', 'foo', 'fat', 'cat', 'tap']}) ...: df Out[1]: A B 0 1 hi 1 2 foo 2 3 fat 3 4 cat 4 5 tap In [2]: df.loc[df['B'].str.contains('t'), 'A'] Out[2]: 2 3 3 4 4 5 Name: A, dtype: int64 In [3]: df.loc[df['B'].str.contains('t'), 'A'].values Out[3]: array([3, 4, 5], dtype=int64) In [4]: df.loc[df['B'].str.contains('t$'), 'A'] Out[4]: 2 3 3 4 Name: A, dtype: int64 In [5]: df.loc[df['B'].str.contains('t$'), 'A'].values Out[5]: array([3, 4], dtype=int64)
Ref:
- https://stackoverflow.com/questions/36684013/extract-column-value-based-on-another-column-pandas-dataframe
- https://stackoverflow.com/questions/15325182/how-to-filter-rows-in-pandas-by-regex
- https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.contains.html
tags | isin regex, find value in a column when another column is equal to something, get value in a column if another column equals something
build one column from another column
Consider
$ ipython Python 3.8.3 (default, May 19 2020, 06:50:17) [MSC v.1916 64 bit (AMD64)] Type 'copyright', 'credits' or 'license' for more information IPython 7.13.0 -- An enhanced Interactive Python. Type '?' for help. In [1]: import pandas as pd ...: df = pd.DataFrame({'a': ['kama', 'raju']}) ...: df Out[1]: a 0 kama 1 raju In [2]: df['b'] = 'foo_' + df['a'] + '_bar' ...: df Out[2]: a b 0 kama foo_kama_bar 1 raju foo_raju_bar In [3]: df['c'] = ['foo_' + i + '_bar' for i in df['a']] ...: df Out[3]: a b c 0 kama foo_kama_bar foo_kama_bar 1 raju foo_raju_bar foo_raju_bar
For normal lists
In [4]: a = ['kama', 'raju'] ...: a Out[4]: ['kama', 'raju'] In [5]: b = ['foo_' + i + '_bar' for i in a] ...: b Out[5]: ['foo_kama_bar', 'foo_raju_bar']
Sum of values in a column when another column is 1
Consider the dataframe
>>> df = pd.DataFrame({'a':[1,1,2,1,2], 'b':[5,7,3,3,5]})
>>> df
   a  b
0  1  5
1  1  7
2  2  3
3  1  3
4  2  5
To get the sum of values of b when column a is 1
>>> df.loc[df['a'] == 1, 'b'].sum()
15
To get the sum of values of b when column a is 2
>>> df.loc[df['a'] == 2, 'b'].sum()
8
Update a column with new data
Given two dataframes df1 and df2, each with two columns a and b, the idea here is to create a new dataframe with values in
- df1 if an entry exists only in df1
- df2 if an entry exists in both df1 and df2
- df2 if an entry exists only in df2
For example, given
df1
   a   b
0  1  18
1  2  19
2  3  20
3  4  21
4  5  22
df2
   a   b
0  5  23
1  4  24
2  6  25
We want
df3
   a   b
0  1  18
1  2  19
2  3  20
3  4  24
4  5  23
5  6  25
Solution:
import pandas as pd
df1 = pd.DataFrame({'a': [1,2,3,4,5], 'b': [18, 19, 20, 21, 22]})
df2 = pd.DataFrame({'a': [5,4,6], 'b': [23, 24, 25]})
df3 = pd.merge(df1, df2, how='outer', on='a')
df3.loc[df3['b_y'].isna(), 'b_y'] = df3['b_x']
df3.drop(['b_x'], axis=1, inplace=True)
df3.rename(columns={'b_y':'b'}, inplace=True)
print('df3')
print(df3)
Sample notebook - nbviewer.jupyter.org/github/KamarajuKusumanchi
tags | override if exists, replace column values, merge with overlay
insert a column at a specific location
To insert 'bar' column after 'foo' column
if 'bar' not in df.columns:
    df.insert(df.columns.get_loc('foo')+1, 'bar', value)
To insert 'bar' before 'foo'
if 'bar' not in df.columns:
    df.insert(df.columns.get_loc('foo'), 'bar', value)
df.insert() will throw an error if you try to insert something that already exists. So it is a good idea to check for its existence first.
You can also insert by position
if 'foo' not in df.columns:
    df.insert(loc, 'foo', value)
So, for example, using loc = 0 will insert the new column at the beginning.
Example:-
% ipython Python 3.8.5 (default, Sep 4 2020, 07:30:14) Type 'copyright', 'credits' or 'license' for more information IPython 7.18.1 -- An enhanced Interactive Python. Type '?' for help. In [1]: import pandas as pd df = pd.DataFrame({'B': [4,5,6], 'E': [13,14,15]}) df Out[1]: B E 0 4 13 1 5 14 2 6 15 In [2]: df.insert(df.columns.get_loc('B')+1, 'C', [7,8,9]) df Out[2]: B C E 0 4 7 13 1 5 8 14 2 6 9 15 In [3]: df.insert(df.columns.get_loc('E'), 'D', [10,11,12]) df Out[3]: B C D E 0 4 7 10 13 1 5 8 11 14 2 6 9 12 15 In [4]: df.insert(0, 'A', [1,2,3]) df Out[4]: A B C D E 0 1 4 7 10 13 1 2 5 8 11 14 2 3 6 9 12 15
Ref:-
- https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.insert.html - shows additional optional arguments.
- https://stackoverflow.com/questions/18674064/how-do-i-insert-a-column-at-a-specific-column-index-in-pandas
append a string column in a chain
import pandas as pd
df = pd.DataFrame({'letters': ['a', 'b', 'c', 'd'], 'numbers': [1,2,3,4]})
print(df)
df2 = df.assign(alphanum='ALL')
print(df2)
print(df)
gives
  letters  numbers
0       a        1
1       b        2
2       c        3
3       d        4
  letters  numbers alphanum
0       a        1      ALL
1       b        2      ALL
2       c        3      ALL
3       d        4      ALL
  letters  numbers
0       a        1
1       b        2
2       c        3
3       d        4
Note:- the assign operation does not change the original dataframe.
append an integer column in a chain
import pandas as pd
df = pd.DataFrame({'letters': ['a', 'b', 'c', 'd'], 'numbers': [1,2,3,4]})
print(df)
df2 = df.assign(alphanum=int(float('12.2')))
print(df2)
print(df2.dtypes)
gives
  letters  numbers
0       a        1
1       b        2
2       c        3
3       d        4
  letters  numbers  alphanum
0       a        1        12
1       b        2        12
2       c        3        12
3       d        4        12
letters     object
numbers      int64
alphanum     int64
dtype: object
append multiple columns in a chain
import pandas as pd
df = pd.DataFrame({'letters': ['a', 'b', 'c', 'd'], 'numbers': [1,2,3,4]})
print(df)
df2 = df.assign(alphanum=[1.2]*df.shape[0], beta=[5,6,7,8])
print(df2)
gives
  letters  numbers
0       a        1
1       b        2
2       c        3
3       d        4
  letters  numbers  alphanum  beta
0       a        1       1.2     5
1       b        2       1.2     6
2       c        3       1.2     7
3       d        4       1.2     8
You can also use
df2 = df.assign(alphanum=1.2, beta=[5,6,7,8])
print(df2)
which gives the same result.
assign value based on a value in another column
df.loc[df['foo'] == FOO, 'bar'] = BAR
The 'bar' column need not exist beforehand.
In [1]: import pandas as pd
   ...: df = pd.DataFrame({'c1': [2,5,6]})
   ...: df
Out[1]:
   c1
0   2
1   5
2   6
In [2]: df.loc[df['c1'] == 5, 'c2'] = 25
   ...: df
Out[2]:
   c1    c2
0   2   NaN
1   5  25.0
2   6   NaN
Append a constant value as a new column
a['bar'] = 'snack'
This will work with dataframes of any length (0 or more) as shown in (github.com/KamarajuKusumanchi)
Note:
a['bar'] = 'snack'
is equivalent to
a['bar'] = ['snack']*len(a)
prepend one element to a Series
In [1]: import pandas as pd
   ...: a = pd.Series(range(3,8))
   ...: print(a)
0    3
1    4
2    5
3    6
4    7
dtype: int64
In [2]: b = pd.Series([2]).append(a, ignore_index=True)
   ...: print(b)
0    2
1    3
2    4
3    5
4    6
5    7
dtype: int64
If ignore_index is not set, the output will be
In [3]: c = pd.Series([2]).append(a)
   ...: print(c)
0    2
0    3
1    4
2    5
3    6
4    7
dtype: int64
Ref:- https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.append.html
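Series.append was deprecated in pandas 1.4 and removed in pandas 2.0; the same prepend can be written with pd.concat, e.g.

import pandas as pd
a = pd.Series(range(3, 8))
b = pd.concat([pd.Series([2]), a], ignore_index=True)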
Find the relative difference between two series
% python3 Python 3.5.3 (default, Jan 19 2017, 14:11:04) [GCC 6.3.0 20170118] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import pandas as pd >>> import numpy as np >>> s1 = pd.Series([1, 2, 3, 5, 8, 9, 0, np.nan, 7, np.nan]); s2 = pd.Series([0, 1, 2, 3, 4, 5, 6, 8, np.nan, np.nan]) >>> pd.DataFrame([s1, s2]) 0 1 2 3 4 5 6 7 8 9 0 1.0 2.0 3.0 5.0 8.0 9.0 0.0 NaN 7.0 NaN 1 0.0 1.0 2.0 3.0 4.0 5.0 6.0 8.0 NaN NaN >>> s1/s2 0 inf 1 2.000000 2 1.500000 3 1.666667 4 2.000000 5 1.800000 6 0.000000 7 NaN 8 NaN 9 NaN dtype: float64 >>> s1/s2 -1 0 inf 1 1.000000 2 0.500000 3 0.666667 4 1.000000 5 0.800000 6 -1.000000 7 NaN 8 NaN 9 NaN dtype: float64
a vs b
isna() vs isnull()
There are four functions:
- pandas.DataFrame.isna - https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.isna.html
- pandas.DataFrame.isnull - https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.isnull.html
- pandas.isnull - https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.isnull.html
- numpy.isnan - https://numpy.org/doc/stable/reference/generated/numpy.isnan.html
The first two work on a DataFrame. The latter two work on an array-like object.
There is no difference between the first two.
See also:
- https://datascience.stackexchange.com/questions/37878/difference-between-isna-and-isnull-in-pandas - gives more details
- https://github.com/pandas-dev/pandas/blob/master/pandas/core/dtypes/missing.py#L127 - shows that pandas.DataFrame.isnull() is just aliased to pandas.DataFrame.isna()
Style recommendation: use pandas.DataFrame.isna() instead of pandas.DataFrame.isnull(). The former is less typing!
join vs merge
What is the difference between pandas.DataFrame.merge and pandas.DataFrame.join?
relevant links:
- https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.join.html
- http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.merge.html
- https://pandas.pydata.org/pandas-docs/stable/merging.html#database-style-dataframe-joining-merging
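In short: DataFrame.join aligns the other frame on its index (optionally against a column of the caller via on=), while DataFrame.merge joins on common columns by default and exposes more options (left_on/right_on, indicator, validate, ...). A small sketch with made-up frames, where both spellings give the same result:

import pandas as pd
left = pd.DataFrame({'key': ['a', 'b'], 'x': [1, 2]})
right = pd.DataFrame({'y': [10, 20]}, index=['a', 'b'])
# join: 'right' is matched on its index against the 'key' column of 'left'
j = left.join(right, on='key')
# merge: both sides are matched on columns, so turn right's index into a column first
m = left.merge(right.reset_index().rename(columns={'index': 'key'}), on='key')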
df['foo'] vs df.foo
Use df['foo'] instead of df.foo
Disadvantage of df.foo:
- If foo collides with an existing DataFrame attribute or method name (e.g. count, size), df.foo silently refers to that attribute instead of the column, and the code breaks.
Also, we cannot rely on catching this at runtime. For example, foo may not be a keyword in the current version of pandas. But it can be in a future version or in an older version. So the code might work with one pandas version and not another. Not worth the headache.
Disadvantage of df['foo']:
- makes code less readable
ix vs loc, iloc
- https://stackoverflow.com/questions/31593201/how-are-iloc-ix-and-loc-different - very good explanation.
- http://pandas-docs.github.io/pandas-docs-travis/user_guide/indexing.html#ix-indexer-is-deprecated
axis=1 vs axis=0
- https://stackoverflow.com/a/49884677/6305733 - very good explanation. Contains a picture that makes it easy to remember.
Missing data
count the number of non null values in a column
Solution: Use
df[col].count()
or
df[col].size - df[col].isna().sum()
Explanation:
- df['foo'].size - count all including null values
- df['foo'].count() - count non-null values
Example 1:
In [1]: import pandas as pd
   ...: df = pd.DataFrame({'A': [3, None, 5, 7]})
   ...: df
Out[1]:
     A
0  3.0
1  NaN
2  5.0
3  7.0
In [2]: [df['A'].size, df['A'].size - df['A'].isna().sum(), df['A'].count()]
Out[2]: [4, 3, 3]
Example 2:
In [1]: import pandas as pd
   ...: import numpy as np
   ...: df = pd.DataFrame({'A': [3, np.nan, 5, 7]})
   ...: df
Out[1]:
     A
0  3.0
1  NaN
2  5.0
3  7.0
In [2]: [df['A'].size, df['A'].size - df['A'].isna().sum(), df['A'].count()]
Out[2]: [4, 3, 3]
remove rows that are all nan
also demonstrates | remove blank columns in a dataframe
df.dropna(axis=0, how='all')
For example
$ ipython
In [1]: import pandas as pd
   ...: import numpy as np
   ...: df = pd.DataFrame({
   ...:     'a': [1.2, 2.3, np.nan, np.nan],
   ...:     'b': [1.3, np.nan, 2.4, np.nan],
   ...:     'c': [np.nan]*4})
In [2]: df
Out[2]:
     a    b   c
0  1.2  1.3 NaN
1  2.3  NaN NaN
2  NaN  2.4 NaN
3  NaN  NaN NaN
Remove rows that are all nan
In [3]: df.dropna(axis=0, how='all')
Out[3]:
     a    b   c
0  1.2  1.3 NaN
1  2.3  NaN NaN
2  NaN  2.4 NaN
Remove columns that are all nan
In [4]: df.dropna(axis=1, how='all')
Out[4]:
     a    b
0  1.2  1.3
1  2.3  NaN
2  NaN  2.4
3  NaN  NaN
stringify nans
replace a string with NaN
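A minimal sketch, assuming the placeholder string is 'N/A' (made up here): DataFrame.replace turns it into a real NaN. When the strings come from a file, passing na_values=['N/A'] to read_csv does the same thing at load time.

import numpy as np
import pandas as pd
df = pd.DataFrame({'a': ['1.5', 'N/A', '2.0']})
df = df.replace('N/A', np.nan)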
replace missing values with a constant in some columns
tags | fillna on some columns
obj_coi = ['A', 'B']
flt_coi = ['C', 'E']
df[obj_coi] = df[obj_coi].fillna(value='UNKNOWN')
df[flt_coi] = df[flt_coi].fillna(value=999999)
Sample code:-
replace missing values with a serial number
tags | Serialize NaNs by column, replace missing values with a sequence, dataframe fillna with sequence, dataframe fill nan values with a range, replace nan with a serial number
def serialize_nans_by_column(df, cols, inplace):
    mask = df[cols].isna()
    if not inplace:
        df_new = df.copy()
    else:
        df_new = df
    for col in cols:
        df_new.loc[mask[col], col] = range(1, 1+mask[col].sum())
    return df_new
Sample code:-
groupby with nan values
$ ipython Python 3.8.1 (default, Mar 2 2020, 13:06:26) [MSC v.1916 64 bit (AMD64)] IPython 7.13.0 -- An enhanced Interactive Python. Type '?' for help. In [1]: import pandas as pd ...: import numpy as np ...: df = pd.DataFrame({'a': ['1', '2', '3', '4', '5'], ...: 'b': ['4', np.nan, '6', '4', np.nan], ...: 'c': [4, np.nan, 6, 4, np.nan], ...: 'd': [7, 3, np.nan, 9, np.nan]}) ...: df Out[1]: a b c d 0 1 4 4.0 7.0 1 2 NaN NaN 3.0 2 3 6 6.0 NaN 3 4 4 4.0 9.0 4 5 NaN NaN NaN In [2]: df.groupby(['b']).groups Out[2]: {'4': Int64Index([0, 3], dtype='int64'), '6': Int64Index([2], dtype='int64')} In [3]: df.groupby(['c']).groups Out[3]: {4.0: Int64Index([0, 3], dtype='int64'), 6.0: Int64Index([2], dtype='int64')} In [4]: df.groupby(['d']).groups Out[4]: {3.0: Int64Index([1], dtype='int64'), 7.0: Int64Index([0], dtype='int64'), 9.0: Int64Index([3], dtype='int64')} In [5]: df.groupby(['b', 'c']).groups Out[5]: {('4', 4.0): Int64Index([0, 3], dtype='int64'), (nan, nan): Int64Index([1], dtype='int64'), ('6', 6.0): Int64Index([2], dtype='int64'), (nan, nan): Int64Index([4], dtype='int64')} In [6]: df.groupby(['b', 'd']).groups Out[6]: {('4', 7.0): Int64Index([0], dtype='int64'), (nan, 3.0): Int64Index([1], dtype='int64'), ('6', nan): Int64Index([2], dtype='int64'), ('4', 9.0): Int64Index([3], dtype='int64'), (nan, nan): Int64Index([4], dtype='int64')} In [7]: df.groupby(['c', 'd']).groups Out[7]: {(4.0, 7.0): Int64Index([0], dtype='int64'), (nan, 3.0): Int64Index([1], dtype='int64'), (4.0, 9.0): Int64Index([3], dtype='int64'), (6.0, nan): Int64Index([2], dtype='int64'), (nan, nan): Int64Index([4], dtype='int64')}
Use fillna to handle the missing values ahead of groupby
% python3 Python 3.5.3 (default, Jan 19 2017, 14:11:04) [GCC 6.3.0 20170118] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import pandas as pd >>> import numpy as np >>> df = pd.DataFrame({'a': ['1', '2', '3'], 'b': ['4', np.NaN, '6']}) >>> df a b 0 1 4 1 2 NaN 2 3 6 >>> df.groupby('b').groups {'4': Int64Index([0], dtype='int64'), '6': Int64Index([2], dtype='int64')} >>> df.fillna(-1).groupby('b').groups {'4': Int64Index([0], dtype='int64'), '6': Int64Index([2], dtype='int64'), -1: Int64Index([1], dtype='int64')}
Ref:- https://stackoverflow.com/questions/18429491/groupby-columns-with-nan-missing-values
missing integer data
Useful links:
- https://pandas.pydata.org/pandas-docs/stable/user_guide/missing_data.html#integer-dtypes-and-missing-data
- https://pandas.pydata.org/pandas-docs/stable/user_guide/integer_na.html
- https://pandas.pydata.org/pandas-docs/stable/user_guide/gotchas.html#gotchas-intna
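A minimal sketch of the nullable integer dtype ('Int64', capital I) described in those pages; it keeps the column integer-typed even with missing values instead of upcasting to float, and the missing entry shows up as <NA>.

import pandas as pd
s = pd.Series([1, 2, None], dtype='Int64')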
read_csv()
read all columns as strings
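A minimal sketch (file name is illustrative): pass dtype=str so read_csv does not infer numeric types. Missing fields still come back as NaN; add keep_default_na=False if they should stay as empty strings.

import pandas as pd
df = pd.read_csv('foo.csv', dtype=str)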
ignore lines with comments
pd.read_csv(file_name, comment='#')
Ref:- https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html
preserve comment lines
# Read the header and data
input_file = 'foo.csv'
with open(input_file) as fh:
    # Assume we have 3 header lines
    header = [fh.readline() for line in range(3)]
    df = pd.read_csv(fh)
# do something with df
# Then save it with the old headers
output_file = 'bar.csv'
with open(output_file, 'w') as fh:
    for line in header:
        fh.write('%s' % line)
    df.to_csv(fh, index=False)
pass url
Since pandas 0.19.2, you can pass a url directly to pandas.read_csv()
Sample code - https://github.com/KamarajuKusumanchi/market_data_processor/blob/master/deprecated/google_finance.py
assign column names
use
names = [list of column names]
for example
pd.read_csv(fname, index_col=None, header=None, names = ['foo', 'bar'])
Ref:- https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html
to_csv()
pass through to automate directory creation
def to_csv(df, dir_name, file_name, **kwargs):
    # This is a pass through function for DataFrame.to_csv()
    # where the parent directory is created if it does not
    # already exist.
    if not os.path.exists(dir_name):
        os.makedirs(dir_name)
    file_path = os.path.join(dir_name, file_name)
    df.to_csv(file_path, **kwargs)
tags | create directory on the fly when using to_csv, extend to_csv, passing kwargs to another function
hardcode unix style line endings
df.to_csv(path_or_buf, line_terminator='\n')
Note:- The default is os.linesep. Possible values are '\n' for linux, '\r\n' for windows. In pandas 1.5.0 this argument was renamed from line_terminator to lineterminator.
Ref:- https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_csv.html
tags | line terminator
insert a new column into a csv file
# Read the data
input_file = 'foo.csv'
with open(input_file) as fh:
    # Assume we have 3 header lines
    header = [fh.readline() for line in range(3)]
    df = pd.read_csv(fh)
# Add the new column
# new_col is a string, the name of the new column.
# new_pos is an integer. Set it to 0 to insert at the beginning.
# new_vals can be a scalar, pandas series etc.,
# Assume new_col, new_pos, new_vals are defined elsewhere.
if new_col not in df.columns:
    df.insert(new_pos, new_col, new_vals)
# Save it to a new file
output_file = 'bar.csv'
with open(output_file, 'w') as fh:
    for line in header:
        fh.write('%s' % line)
    df.to_csv(fh, index=False)
check if
check if a column exists in a dataframe
if 'foo' in df.columns:
Or
if 'foo' in df:
check if a value is in a column
val in df['col'].values
check if at least one element is true in a dataframe column
df['foo'].any()
sample notebook (github.com/KamarajuKusumanchi)
Check if two dataframes are equal
To check if two dataframes are equal and ignore the order of rows & columns during comparison
from pandas.testing import assert_frame_equal
assert_frame_equal(result, expected, check_like=True)
Ref:-
- Documentation - https://pandas.pydata.org/pandas-docs/stable/generated/pandas.testing.assert_frame_equal.html
- In action - https://github.com/KamarajuKusumanchi/market_data_processor/blob/master/tests/test_google_finance.py
check if a column has nan values
df['foo'].isnull().any()
to get those rows
df[df['foo'].isnull()]
sample notebook (github.com/KamarajuKusumanchi)
check if column values exist in another dataframe column
df1['check'] = df1['col1'].isin(df2['col2'])
Example:-
In [5]: import pandas as pd ...: df1 = pd.DataFrame({'col1': [1,2,3,4,5]}) ...: df2 = pd.DataFrame({'col1': [0,1,1,1], 'col2': [2,2,3,4]}) In [6]: df1 Out[6]: col1 0 1 1 2 2 3 3 4 4 5 In [7]: df2 Out[7]: col1 col2 0 0 2 1 1 2 2 1 3 3 1 4
Add a new column to df1 that indicates if values in df['col1'] exist anywhere in df2['col2']
In [8]: df1['check'] = df1['col1'].isin(df2['col2']) ...: df1 Out[8]: col1 check 0 1 False 1 2 True 2 3 True 3 4 True 4 5 False
To do the same but check multiple columns instead of a single column
In [9]: df1['check'] = df1['col1'].isin(df2['col1']) \ ...: | df1['col1'].isin(df2['col2']) ...: df1 Out[9]: col1 check 0 1 True 1 2 True 2 3 True 3 4 True 4 5 False
Summarized from | https://stackoverflow.com/questions/57693908/find-column-value-in-dataframe/
check if differences are smaller than threshold
$ ipython Python 3.6.5 |Anaconda, Inc.| (default, Mar 29 2018, 13:32:41) [MSC v.1900 64 bit (AMD64)] Type 'copyright', 'credits' or 'license' for more information IPython 6.4.0 -- An enhanced Interactive Python. Type '?' for help. In [1]: import pandas as pd ...: import numpy as np ...: measures = ['a', 'b', 'c', 'd', 'e'] ...: df1 = pd.DataFrame(np.random.rand(10,5), columns=measures); df2 = pd.DataFrame(np.random.rand(10,5), columns=measures); ...: max_diff_allowed = pd.Series([0.8, 0.7, 0.6, 0.9, 0.2], index=measures); ...: max_diff_got = (df1 - df2).abs().max() ...: print(max_diff_got <= max_diff_allowed) ...: ...: assert (max_diff_got <= max_diff_allowed).all(),\ ...: 'Differences exceeded thresholds.\n'\ ...: 'diff_allowed = \n{}\n'\ ...: 'diff_got = \n{}\n'\ ...: .format(max_diff_allowed, max_diff_got) a True b True c False d True e False dtype: bool --------------------------------------------------------------------------- AssertionError Traceback (most recent call last) <ipython-input-1-9f36e6126ff8> in <module>() 7 print(max_diff_got <= max_diff_allowed) 8 ----> 9 assert (max_diff_got <= max_diff_allowed).all(),'Differences exceeded thresholds.\n''diff_allowed = \n{}\n''diff_got = \n{}\n'.format(max_diff_allowed, max_diff_got) AssertionError: Differences exceeded thresholds. diff_allowed = a 0.8 b 0.7 c 0.6 d 0.9 e 0.2 dtype: float64 diff_got = a 0.646379 b 0.453449 c 0.803813 d 0.385751 e 0.652210 dtype: float64
convert stuff
convert a floating point column to int column
for the platform default integer type (int32 on Windows, int64 on Linux)
df['col'] = df['col'].astype(int)
for int64
df['col'] = df['col'].astype(np.int64)
convert number strings to float
tags | convert numbers with commas to float, convert numbers in parentheses to negative
The idea here is to convert number strings such as 123,456.78 to 123456.78 and (123,456.78) to -123456.78
df['col'].replace('[,)]', '', regex=True)\
         .replace('[(]', '-', regex=True)\
         .astype(float)
sample notebook (github.com/KamarajuKusumanchi)
convert int64 YYYYMMDD to datetime64
df['date'] = pd.to_datetime(df['date'], format='%Y%m%d')
convert all values in a dataframe column to lowercase
% python3 Python 3.5.3 (default, Jan 19 2017, 14:11:04) [GCC 6.3.0 20170118] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import pandas as pd >>> import numpy as np >>> df = pd.DataFrame({'a':['K', 'a', 'M', np.nan, 'A'], 'b': ['R', 'a', 'J', 'u', np.nan]}) >>> df a b 0 K R 1 a a 2 M J 3 NaN u 4 A NaN >>> df['a'] = df['a'].str.lower() >>> df a b 0 k R 1 a a 2 m J 3 NaN u 4 a NaN
convert the column names of a dataframe to lower case
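A minimal sketch using the vectorized string methods on the column index:

df.columns = df.columns.str.lower()
# or, equivalently
df = df.rename(columns=str.lower)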
change column of floating point numbers stored as strings to integers
Trying to directly convert something like '1.1' to an integer will throw an error. The trick is to first convert the strings to floating point numbers and then convert them to integers.
$ ipython In [1]: import pandas as pd ...: data = {'id': ['a', 'b', 'c', 'd'], 'price': ['1.1', '2.8', '3.5', '4.5']} ...: df = pd.DataFrame(data) ...: print(df) id price 0 a 1.1 1 b 2.8 2 c 3.5 3 d 4.5 In [2]: df.dtypes Out[2]: id object price object dtype: object In [3]: df.price = df.price.astype('float') ...: df.price = df.price.astype('int') ...: df.dtypes Out[3]: id object price int32 dtype: object In [4]: print(df) id price 0 a 1 1 b 2 2 c 3 3 d 4
change a column of strings to floating point
$ ipython In [1]: import pandas as pd ...: data = {'id': ['a', 'b'], 'price': ['1.1', '2.2']} ...: df = pd.DataFrame(data) ...: print(df) id price 0 a 1.1 1 b 2.2 In [2]: df.dtypes Out[2]: id object price object dtype: object In [3]: df.price = df.price.astype('float') ...: df.dtypes Out[3]: id object price float64 dtype: object In [4]: print(df) id price 0 a 1.1 1 b 2.2
add columns
populate membership column
$ipython In [1]: import pandas as pd ...: df1 = pd.DataFrame({'days':['sun', 'mon', 'tue', 'wed', 'thu', 'fri', 'sat']}) ...: df1 Out[1]: days 0 sun 1 mon 2 tue 3 wed 4 thu 5 fri 6 sat In [2]: df2 = pd.DataFrame({'data':['fri', 'foo', 'bar', 'mon', 'thu']}) ...: df2 Out[2]: data 0 fri 1 foo 2 bar 3 mon 4 thu In [3]: df2['is_day'] = df2['data'].isin(df1['days']) ...: df2 Out[3]: data is_day 0 fri True 1 foo False 2 bar False 3 mon True 4 thu True
add empty column to an empty dataframe
$ ipython In [1]: import pandas as pd ...: df = pd.DataFrame(columns=['protein', 'fat']) ...: print(df) Empty DataFrame Columns: [protein, fat] Index: [] In [2]: df['fiber'] = '' ...: print(df) Empty DataFrame Columns: [protein, fat, fiber] Index: []
dummy
String for SQL where clause
call a function on each row of a dataframe
tags | row by row
If the function only requires values in a single column of the dataframe
df['bar'] = df['foo'].apply(lambda x: MyGreatFunc(arg1, x, arg3))
If you want to call a function that takes arguments from a row of a dataframe and repeat that for each row in the dataframe, see https://stackoverflow.com/questions/39814416/pandas-apply-with-args-which-are-dataframe-row-entries . Sample code
import pandas as pd
df = pd.DataFrame({'A':[1,2,3], 'B':[4,5,6]})
print(df)
   A  B
0  1  4
1  2  5
2  3  6

def myfunction(B, A):
    # do some stuff
    result = B + A    # do something here to get the result
    return result

df['C'] = df.apply(lambda x: myfunction(x.B, x.A), axis=1)
print(df)
   A  B  C
0  1  4  5
1  2  5  7
2  3  6  9
or
def myfunction(x):
    result = x.B + x.A    # do something here to get the result
    return result

df['C'] = df.apply(myfunction, axis=1)
print(df)
   A  B  C
0  1  4  5
1  2  5  7
2  3  6  9
pretty print dataframe without index
df.to_string(index=False)
The default is to print the index
df.to_string()
For example
>>> import pandas as pd >>> a = [2, -3, 4]; b = ['a', 'b', 'c']; c = [7, 4, 1] >>> df = pd.DataFrame({'a':a, 'b':b, 'c':c}) >>> print(df) a b c 0 2 a 7 1 -3 b 4 2 4 c 1 >>> print(df.to_string(index=False)) a b c 2 a 7 -3 b 4 4 c 1
Ref:- https://stackoverflow.com/questions/24644656/how-to-print-dataframe-without-index
print index and column names on the same row
Use
print(df.reset_index().to_string(index=False))
For example
In [1]: import numpy as np ...: import pandas as pd ...: df = pd.DataFrame(np.array([[1, 2, 3], [4, 5, 6]]),columns=['a','b','c']) ...: df.set_index('a', inplace=True) ...: df Out[1]: b c a 1 2 3 4 5 6 In [2]: print(df.reset_index().to_string(index=False)) a b c 1 2 3 4 5 6
Ref:- https://stackoverflow.com/questions/43635706/pandas-index-title-in-line-with-column-headers/
print either to file or to stdout
df.to_csv(out_file if out_file else sys.stdout, index=False)
print a newline before printing dataframe in the logger
logger.info('df is\n' + df.to_string())
Ref:- https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_string.html
print all values in a pandas series
- foo.to_csv(sys.stdout, index=False)
- http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.to_csv.html
print value in last row of a column
print('As of {date}'.format(date=df.iloc[-1]['Date Collected']))
working example | https://github.com/KamarajuKusumanchi/market_data_processor/blob/master/corona_virus/specimens.py
give a name to the column index
df.index.name = 'foo'
Ref:- https://stackoverflow.com/questions/18022845/pandas-index-column-title-or-name
Sort by absolute value
Sample notebook - nbviewer.jupyter.org/github/KamarajuKusumanchi
uses | argsort, iloc
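A hedged sketch of the argsort + iloc approach referenced above (column name 'a' is made up). Since pandas 1.1, df.sort_values('a', key=abs) does the same thing directly.

import pandas as pd
df = pd.DataFrame({'a': [2, -3, 4, -1]})
df_sorted = df.iloc[df['a'].abs().argsort()]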
Sort on date strings
sample notebook (github.com/KamarajuKusumanchi)
task | Sort a data frame based on a column whose values are strings of the form "abbreviated_month_name dd YYYY" (ex:- "Aug 25 2016").
uses | pd.to_datetime, DataFrame.sort_values
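A hedged sketch under the same assumptions (a column of 'Aug 25 2016'-style strings, here called 'date'): parse with pd.to_datetime and sort on the parsed values.

import pandas as pd
df = pd.DataFrame({'date': ['Aug 25 2016', 'Jan 03 2015', 'Dec 12 2016']})
df['parsed'] = pd.to_datetime(df['date'], format='%b %d %Y')
df = df.sort_values('parsed')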
extract first 8 characters of a column in a dataframe
>>> a['Date'] 0 20160201.0 1 20160201.0 2 20160201.0 3 20160104.0 4 20160104.0 5 20160104.0 6 20161201.0 7 20161201.0 8 20161201.0 Name: Date, dtype: object >>> a['Date'].str[:8] 0 20160201 1 20160201 2 20160201 3 20160104 4 20160104 5 20160104 6 20161201 7 20161201 8 20161201 Name: Date, dtype: object
iterate over each column of a dataframe except one
cols = df.columns.tolist()
cols.remove('foo')
for col in cols:
    pass  # do something with df[col]
Iterate over each month
import pandas as pd
from pandas.tseries.offsets import *
for end_dt in pd.date_range('20160110', '20160920', freq='M'):
    begin_dt = end_dt + MonthBegin(n=-1)
    end_dt_yyyymmdd = end_dt.strftime('%Y%m%d')
    begin_dt_yyyymmdd = begin_dt.strftime('%Y%m%d')
    print(begin_dt_yyyymmdd, end_dt_yyyymmdd)
will produce
20160101 20160131
20160201 20160229
20160301 20160331
20160401 20160430
20160501 20160531
20160601 20160630
20160701 20160731
20160801 20160831
Using
pd.date_range('20160110', '20160930', freq='M')
will produce
20160101 20160131
20160201 20160229
20160301 20160331
20160401 20160430
20160501 20160531
20160601 20160630
20160701 20160731
20160801 20160831
20160901 20160930
Iterate over each quarter
import pandas as pd
from pandas.tseries.offsets import *
for end_dt in pd.date_range('20140101', '20160930', freq='Q')[::-1]:
    begin_dt = end_dt + MonthBegin(n=-3)
    end_dt_yyyymmdd = end_dt.strftime('%Y%m%d')
    begin_dt_yyyymmdd = begin_dt.strftime('%Y%m%d')
    print(begin_dt_yyyymmdd, end_dt_yyyymmdd)
will produce
20160701 20160930
20160401 20160630
20160101 20160331
20151001 20151231
20150701 20150930
20150401 20150630
20150101 20150331
20141001 20141231
20140701 20140930
20140401 20140630
20140101 20140331
iterate over each row of a dataframe
To iterate over each row of a dataframe, it is better to use DataFrame.itertuples() over DataFrame.iterrows() as explained in https://stackoverflow.com/a/41022840/6305733
itertuples is faster, preserves dtypes across the rows.
for row in df.itertuples(index=True, name='Pandas'):
    print(getattr(row, "c1"), getattr(row, "c2"))
tags | row by row
groupby iterate in sorted order
By default group keys are sorted. Use sort=False to disable sorting on group keys.
grouped = df.groupby(by=['A'], sort=False)
Example:- sample notebook (github.com/KamarajuKusumanchi)
Ref:- https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html
using itertuples
DataFrame.itertuples() can be used to iterate over DataFrame rows as namedtuples, with index as first element of the tuple.
>>> import pandas as pd
>>> df = pd.DataFrame({'col1': [1, 2], 'col2': [0.1, 0.2]}, index=['a', 'b'])
>>> df
   col1  col2
a     1   0.1
b     2   0.2
>>> for row in df.itertuples():
...     print(row)
...
Pandas(Index='a', col1=1, col2=0.10000000000000001)
Pandas(Index='b', col1=2, col2=0.20000000000000001)
To print just the first element instead of printing all the elements
>>> g = df.itertuples()
>>> next(g, 'default')
Pandas(Index='a', col1=1, col2=0.10000000000000001)
Subsequent calls will print the next element or the default value if there are no elements left.
>>> next(g, 'default')
Pandas(Index='b', col1=2, col2=0.20000000000000001)
>>> next(g, 'default')
'default'
If the 'default' is not supplied, it will throw a StopIteration exception when there are no elements left.
>>> g = df.itertuples()
>>> next(g)
Pandas(Index='a', col1=1, col2=0.10000000000000001)
>>> next(g)
Pandas(Index='b', col1=2, col2=0.20000000000000001)
>>> next(g)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
StopIteration
Extract some columns from a data frame and make a copy
One approach
new = old[['A', 'C', 'D']].copy()
Another approach is to use the filter function, which will create a copy by default:
new = old.filter(['A', 'B', 'D'])
The default is to filter by columns (axis=1). To filter by rows, use axis=0. For example:
new = old.filter(['foo', 'bar'], axis=0)
Conditional assignment
>>> import pandas as pd >>> df = pd.DataFrame({'letters': ['a', 'b', 'c', 'd'], 'numbers': [1,2,3,4]}) >>> df letters numbers 0 a 1 1 b 2 2 c 3 3 d 4 >>> df['new'] = 'default' >>> df letters numbers new 0 a 1 default 1 b 2 default 2 c 3 default 3 d 4 default >>> df.loc[df['numbers'] > 2.5, 'new'] = 'b+' >>> df letters numbers new 0 a 1 default 1 b 2 default 2 c 3 b+ 3 d 4 b+
filter rows by conditions
Capture rows by conditioning on two columns
mask_foo = (df['foo'] == 'FOO') & (df['bar'] >= 100)
mask_sec = (df['foo'] == 'SEC') & (df['bar'] >= 500)
df2 = df[ (mask_foo | mask_sec) ]
Remove null entries and everything less than 500 in column foo
mask = (pd.isnull(df['foo'])) | \
       (df['foo'] < 500)
df2 = df[~ mask]
Show rows where values in one column are missing and values in a different column equals something
df[ (pd.isnull(df['foo'])) & (df['bar'] == 'baz')]
tags | logical, missing values
delete columns in a dataframe
To delete one column
df = df.drop('column_name', axis=1)
where axis is 0 for rows and 1 for columns; the default is 0. (Recent pandas versions require axis to be passed as a keyword.)
To delete a column in place
df.drop('column_name', axis=1, inplace=True)
To delete multiple columns
df = df.drop(['foo', 'bar'], axis=1)
To delete columns by number, e.g. the 1st, 2nd and 4th columns:
df.drop(df.columns[[0, 1, 3]], axis=1) # df.columns is zero-based pd.Index
Ref:-
- See LondonRob reply in http://stackoverflow.com/questions/13411544/delete-column-from-pandas-dataframe
tags | drop columns, remove columns
Joining two dataframes in pandas
To merge data frames a and b on column 'foo' and store the result in a new data frame, m
import pandas as pd
...
m = pd.merge(a, b, on='foo')
Ref:-
- http://pandas.pydata.org/pandas-docs/stable/merging.html - The merge function can do lot of things. This link covers all of that.
Inner join multiple dataframes
Sample code to merge multiple dataframes on a bunch of columns and then renaming the columns.
cols = ['foo', 'bar']
df = df1\
    .merge(df2, on=cols)\
    .merge(df3, on=cols)\
    .merge(df4, on=cols)\
    .rename(columns={'foo':'alpha', 'bar':'beta'})
Note:- df1 is not changed when you apply a merge on it.
Inner join two dataframes
df1 = pd.DataFrame({'a': [1,1,2,2,3,3], 'b':[0,1,2,3,4,5]})
df2 = pd.DataFrame({'a': [1,2,3], 'c':[2,4,6]})
print(df1)
print(df2)
df3 = df1.merge(df2, how='inner', on=['a'])
print(df3)
   a  b
0  1  0
1  1  1
2  2  2
3  2  3
4  3  4
5  3  5
   a  c
0  1  2
1  2  4
2  3  6
   a  b  c
0  1  0  2
1  1  1  2
2  2  2  4
3  2  3  4
4  3  4  6
5  3  5  6
Inner joining two dataframes
Note that when two dataframes are inner joined, the resulting dataframe can potentially be larger than both input frames. This can happen if there are duplicate rows in either data frame over the join columns. For example, consider
>>> import pandas as pd
>>> df1 = pd.DataFrame([[1, 3], [1, 4]], columns=['A', 'B'])
>>> df1
   A  B
0  1  3
1  1  4
>>> df2 = pd.DataFrame([[1, 5], [1, 6]], columns=['A', 'C'])
>>> df2
   A  C
0  1  5
1  1  6
>>> df3 = pd.merge(df1, df2, on='A', how='inner')
>>> df3
   A  B  C
0  1  3  5
1  1  3  6
2  1  4  5
3  1  4  6
which shows 4 rows in df3 even though it was created by inner joining two data frames that each have 2 rows.
If the duplicates are not expected, try cleaning the data using DataFrame.drop_duplicates()
Ref:- http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.merge.html
using pandas.DataFrame.merge
The pandas.DataFrame.merge does not overwrite the dataframe it operates on. To do that use
df = df.merge(right)
Ref:- https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.merge.html
Append two dataframes
>>> import pandas as pd
>>> df = pd.DataFrame([[1, 2], [3, 4]], columns=list('AB'))
>>> df
   A  B
0  1  2
1  3  4
>>> df2 = pd.DataFrame([[5, 6], [7, 8]], columns=list('AB'))
>>> df2
   A  B
0  5  6
1  7  8
>>> df_merge = df.append(df2, ignore_index=True)
>>> df_merge
   A  B
0  1  2
1  3  4
2  5  6
3  7  8
This will not modify df, df2.
By default, duplicates are not eliminated. Use drop_duplicates() for that.
>>> df3 = pd.DataFrame([[5, 6], [7, 8], [3, 4]], columns=list('AB'))
>>> df3
   A  B
0  5  6
1  7  8
2  3  4
>>> df_merge = df.append(df3, ignore_index=True)
>>> df_merge
   A  B
0  1  2
1  3  4
2  5  6
3  7  8
4  3  4
>>> df_merge = df.append(df3, ignore_index=True).drop_duplicates()
>>> df_merge
   A  B
0  1  2
1  3  4
2  5  6
3  7  8
Missing entries will be filled by NaN.
>>> df4 = pd.DataFrame([[5, 6, 7], [7, 8, 9]], columns=list('ABC'))
>>> df4
   A  B  C
0  5  6  7
1  7  8  9
>>> df_merge = df.append(df4, ignore_index=True).drop_duplicates()
>>> df_merge
   A  B    C
0  1  2  NaN
1  3  4  NaN
2  5  6  7.0
3  7  8  9.0
You can also use pd.concat()
>>> df_merge = pd.concat([df, df3], ignore_index=True).drop_duplicates()
>>> df_merge
   A  B
0  1  2
1  3  4
2  5  6
3  7  8
Append array of dataframes
master = pd.concat([pd.read_csv(file) for file in files])
Dump dataframe to a gzip file
https://github.com/KamarajuKusumanchi/sampleusage/blob/master/python/pandas/df_to_gzip.py
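The gist of that script, as a minimal sketch (file name is illustrative): to_csv infers gzip compression from a .gz suffix, or it can be forced with compression='gzip'.

import pandas as pd
df = pd.DataFrame({'a': [1, 2, 3]})
df.to_csv('foo.csv.gz', index=False, compression='gzip')
df_back = pd.read_csv('foo.csv.gz')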
get duplicates
% python3 Python 3.5.3rc1 (default, Jan 3 2017, 04:40:57) [GCC 6.3.0 20161229] on linux >>> import pandas as pd >>> a = pd.DataFrame({'isp': ['comcast', 'telmex', 'comcast'], 'country' : ['us', 'mexico', 'us']}) >>> a country isp 0 us comcast 1 mexico telmex 2 us comcast >>> a[a.duplicated()] country isp 2 us comcast
Ref:- http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.duplicated.html
drop duplicates
>>> import pandas as pd >>> df = pd.DataFrame([[1,2], [3,4], [5, 6], [7, 8], [5, 4]], columns=list('AB')) >>> df A B 0 1 2 1 3 4 2 5 6 3 7 8 4 5 4 >>> df.drop_duplicates() A B 0 1 2 1 3 4 2 5 6 3 7 8 4 5 4 >>> df.drop_duplicates(subset=["B"]) A B 0 1 2 1 3 4 2 5 6 3 7 8 >>> df.drop_duplicates(subset=["B"], keep='last') A B 0 1 2 2 5 6 3 7 8 4 5 4 >>> df.drop_duplicates(subset=["B"], keep=False) A B 0 1 2 2 5 6 3 7 8
Ref:- http://pandas.pydata.org/pandas-docs/version/0.17.1/generated/pandas.DataFrame.drop_duplicates.html
drop duplicate columns with different column names
% python3 Python 3.5.3 (default, Jan 19 2017, 14:11:04) [GCC 6.3.0 20170118] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import pandas as pd >>> df = pd.DataFrame({'a': [1,2,3], 'b': [4,5,6], 'c': [1,2,3]}) >>> df a b c 0 1 4 1 1 2 5 2 2 3 6 3 >>> df2 = df.T.drop_duplicates().T >>> df2 a b 0 1 4 1 2 5 2 3 6
Note:- The following does not work since the duplicate columns do not have the same name.
>>> df3 = df.loc[:,~df.columns.duplicated()] >>> df3 a b c 0 1 4 1 1 2 5 2 2 3 6 3
columns in A but not in B
Use df1.columns.difference(df2.columns)
% ipython Python 3.8.5 (default, Sep 4 2020, 07:30:14) Type 'copyright', 'credits' or 'license' for more information IPython 7.18.1 -- An enhanced Interactive Python. Type '?' for help. In [1]: import pandas as pd df1 = pd.DataFrame({ 'A' : [1.0, 2.0, 3.0, 4.0], 'B' : [100, 200, 300, 400], 'C' : [2, 3, 4, 5] }) df2 = pd.DataFrame({ 'B' : [1.0, 2.0, 3.0, 4.0], 'C' : [100, 200, 300, 400], 'D' : [2, 3, 4, 5] }) In [2]: print(df1) A B C 0 1.0 100 2 1 2.0 200 3 2 3.0 300 4 3 4.0 400 5 In [3]: print(df2) B C D 0 1.0 100 2 1 2.0 200 3 2 3.0 300 4 3 4.0 400 5 In [4]: print(df1.columns) print(df2.columns) Index(['A', 'B', 'C'], dtype='object') Index(['B', 'C', 'D'], dtype='object') In [5]: print(type(df1.columns)) <class 'pandas.core.indexes.base.Index'> In [6]: df1.columns.difference(df2.columns) Out[6]: Index(['A'], dtype='object') In [7]: df2.columns.difference(df1.columns) Out[7]: Index(['D'], dtype='object') In [8]: df1.columns.intersection(df2.columns) Out[8]: Index(['B', 'C'], dtype='object') In [9]: df1.columns.union(df2.columns) Out[9]: Index(['A', 'B', 'C', 'D'], dtype='object') In [10]: df1.columns.symmetric_difference(df2.columns) Out[10]: Index(['A', 'D'], dtype='object') In [11]: df1.columns & df2.columns Out[11]: Index(['B', 'C'], dtype='object') In [12]: df1.columns | df2.columns Out[12]: Index(['A', 'B', 'C', 'D'], dtype='object') In [13]: df1.columns ^ df2.columns Out[13]: Index(['A', 'D'], dtype='object')
common and non-common columns between two dataframes
In [1]: import pandas as pd In [9]: idx1 = pd.Index(['a3', 'a2', 'a1']) idx2 = pd.Index(['a5', 'a3', 'a4']) print(idx1) print(idx2) Out [9]: Index([u'a3', u'a2', u'a1'], dtype='object') Index([u'a5', u'a3', u'a4'], dtype='object') In [10]: idx1.intersection(idx2) Out[10]: Index([u'a3'], dtype='object') In [12]: idx2.difference(idx1) Out[12]: Index([u'a4', u'a5'], dtype='object')
Ref:-
- https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Index.intersection.html - preserves the order of the calling index
- https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Index.difference.html - result is sorted if sorting is possible.
remove non digit characters
cols = ['foo', 'bar']
df[cols] = df[cols].replace(to_replace='[^0-9]', value='', regex=True)
replace empty strings with zero string
cols = ['foo', 'bar']
df[cols] = df[cols].replace(to_replace='', value='0', regex=True)
replace character in pandas column
google search | apply regex to a column
To replace in a column
df['foo'] = df['foo'].replace(to_replace='‡', value='', regex=True)
To replace in the entire dataframe
df = df.replace(to_replace='‡', value='', regex=True)
Working example | https://github.com/KamarajuKusumanchi/market_data_processor/blob/master/corona_virus/specimens.py
Ref:-
conditional replacement of values
To replace values in a dataframe column when some constraints are satisfied, use
df.loc[mask, col] = value
See | https://stackoverflow.com/questions/21608228/conditional-replace-pandas
Example:
$ ipython Python 2.7.13 |Anaconda custom (64-bit)| (default, Dec 19 2016, 13:29:36) [MSC v.1500 64 bit (AMD64)] IPython 5.1.0 -- An enhanced Interactive Python. In [1]: import pandas as pd ...: import numpy as np ...: np.random.seed(42) ...: df = pd.DataFrame(np.random.randn(8,4)) ...: print(df) 0 1 2 3 0 0.496714 -0.138264 0.647689 1.523030 1 -0.234153 -0.234137 1.579213 0.767435 2 -0.469474 0.542560 -0.463418 -0.465730 3 0.241962 -1.913280 -1.724918 -0.562288 4 -1.012831 0.314247 -0.908024 -1.412304 5 1.465649 -0.225776 0.067528 -1.424748 6 -0.544383 0.110923 -1.150994 0.375698 7 -0.600639 -0.291694 -0.601707 1.852278 In [2]: mask = df[1]>0 ...: df.loc[mask, 1] = np.nan ...: print(df) 0 1 2 3 0 0.496714 -0.138264 0.647689 1.523030 1 -0.234153 -0.234137 1.579213 0.767435 2 -0.469474 NaN -0.463418 -0.465730 3 0.241962 -1.913280 -1.724918 -0.562288 4 -1.012831 NaN -0.908024 -1.412304 5 1.465649 -0.225776 0.067528 -1.424748 6 -0.544383 NaN -1.150994 0.375698 7 -0.600639 -0.291694 -0.601707 1.852278 In [3]: mask = df[2] > 0 ...: df.loc[mask, 3] = np.nan ...: print(df) 0 1 2 3 0 0.496714 -0.138264 0.647689 NaN 1 -0.234153 -0.234137 1.579213 NaN 2 -0.469474 NaN -0.463418 -0.465730 3 0.241962 -1.913280 -1.724918 -0.562288 4 -1.012831 NaN -0.908024 -1.412304 5 1.465649 -0.225776 0.067528 NaN 6 -0.544383 NaN -1.150994 0.375698 7 -0.600639 -0.291694 -0.601707 1.852278 In [4]: mask = ~df[3].isnull() ...: df.loc[mask, 2] = -9.999999 ...: print(df) 0 1 2 3 0 0.496714 -0.138264 0.647689 NaN 1 -0.234153 -0.234137 1.579213 NaN 2 -0.469474 NaN -9.999999 -0.465730 3 0.241962 -1.913280 -9.999999 -0.562288 4 -1.012831 NaN -9.999999 -1.412304 5 1.465649 -0.225776 0.067528 NaN 6 -0.544383 NaN -9.999999 0.375698 7 -0.600639 -0.291694 -9.999999 1.852278
mean response when predictor is nonzero
Consider
% cat train.csv
y,X0,X1,X2
4.8,a,0,1
8.8,a,1,1
7.6,b,0,1
8.1,b,1,1
7.8,b,0,0
9.3,c,1,0
where y is the response variable and X0, X1 and X2 are predictors. X1 and X2 are binary predictors (meaning they can either be 0 or 1), X0 is a categorical variable that can take values a, b, c. The idea here is to find the mean of the response variable when the predictor is true. For X0, we want to find the mean for each category.
Sample code
import pandas as pd
df_raw = pd.read_csv("train.csv")
print(df_raw)
df = pd.get_dummies(df_raw)
print(df)
ycol = 'y'
xcols = ['X1', 'X2', 'X0_a', 'X0_b', 'X0_c']
response = pd.DataFrame(columns=xcols, index=['mean', 'std', 'score'])
for xcol in xcols:
    mean_value = df.loc[df[xcol] == 1, ycol].mean()
    std_value = df.loc[df[xcol] == 1, ycol].std()
    score = mean_value/std_value
    response.loc['mean', xcol] = mean_value
    response.loc['std', xcol] = std_value
    response.loc['score', xcol] = score
print(response)
Sample output
     y X0  X1  X2
0  4.8  a   0   1
1  8.8  a   1   1
2  7.6  b   0   1
3  8.1  b   1   1
4  7.8  b   0   0
5  9.3  c   1   0
     y  X1  X2  X0_a  X0_b  X0_c
0  4.8   0   1     1     0     0
1  8.8   1   1     1     0     0
2  7.6   0   1     0     1     0
3  8.1   1   1     0     1     0
4  7.8   0   0     0     1     0
5  9.3   1   0     0     0     1
             X1       X2      X0_a      X0_b  X0_c
mean    8.73333    7.325       6.8   7.83333   9.3
std    0.602771  1.75381   2.82843  0.251661   NaN
score   14.4886  4.17663   2.40416   31.1265   NaN
Move column to another position
sample code (github.com/KamarajuKusumanchi); a minimal sketch is also given after the references below
tags | move column to the beginning
demonstrates | using reindex, list insert
Ref:-
- https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.reindex.html
- https://stackoverflow.com/questions/25122099/move-column-by-name-to-front-of-table-in-pandas
- https://developers.google.com/edu/python/lists
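Since the linked sample code is not reproduced here, below is a minimal sketch, with a made-up dataframe and column name, of moving a column to the front using list insert and reindex:
import pandas as pd

df = pd.DataFrame({'a': [1, 2], 'b': [3, 4], 'c': [5, 6]})

# Move column 'c' to the front: pop it out of the column list,
# insert it at position 0, then reindex the columns in the new order.
cols = list(df.columns)
cols.insert(0, cols.pop(cols.index('c')))
df = df.reindex(columns=cols)
print(df)
#    c  a  b
# 0  5  1  3
# 1  6  2  4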
Combine two dataframes by appending columns
df_all = pd.concat([df1, df2], axis=1)
Ref:- "Concatenating objects" section in https://pandas.pydata.org/pandas-docs/stable/merging.html
combine and separate columns
The idea here is to combine two columns of a dataframe into a tuple column and subsequently break it into separate columns.
Consider the following dataframe
% python3 Python 3.5.3 (default, Jan 19 2017, 14:11:04) [GCC 6.3.0 20170118] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import pandas as pd >>> df = pd.DataFrame({'item':['item A', 'item B', 'item B', 'item C', 'item A'], 'value':[59, 95, 82, 40, 11]}) >>> >>> df item value 0 item A 59 1 item B 95 2 item B 82 3 item C 40 4 item A 11
Combine the columns into a tuple and add it as another column
>>> df['item_value'] = list(zip(df.item, df.value)) >>> df item value item_value 0 item A 59 (item A, 59) 1 item B 95 (item B, 95) 2 item B 82 (item B, 82) 3 item C 40 (item C, 40) 4 item A 11 (item A, 11)
To unpack the tuple into a new dataframe
>>> df2 = df['item_value'].apply(pd.Series) >>> df2 0 1 0 item A 59 1 item B 95 2 item B 82 3 item C 40 4 item A 11
To rename the columns while unpacking
>>> df2 = df['item_value'].apply(pd.Series).rename(columns={0:'new_item', 1:'new_value'}) >>> df2 new_item new_value 0 item A 59 1 item B 95 2 item B 82 3 item C 40 4 item A 11
So far, the original dataframe has not changed.
>>> df item value item_value 0 item A 59 (item A, 59) 1 item B 95 (item B, 95) 2 item B 82 (item B, 82) 3 item C 40 (item C, 40) 4 item A 11 (item A, 11)
To unpack the tuples into the original dataframe itself
>>> df[['new_item', 'new_value']] = df['item_value'].apply(pd.Series) >>> df item value item_value new_item new_value 0 item A 59 (item A, 59) item A 59 1 item B 95 (item B, 95) item B 95 2 item B 82 (item B, 82) item B 82 3 item C 40 (item C, 40) item C 40 4 item A 11 (item A, 11) item A 11
Ref:-
- https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.rename.html - rename columns in a dataframe
- https://stackoverflow.com/questions/29550414/how-to-split-column-of-tuples-in-pandas-dataframe - unpacking a tuple column to separate columns
- https://stackoverflow.com/questions/16031056/how-to-form-tuple-column-from-two-columns-in-pandas - combine two dataframe columns into a tuple
Remove columns that are all zero
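One way to do this, shown as a minimal sketch with a made-up dataframe, is to keep only the columns that have at least one non-zero value:
import pandas as pd

df = pd.DataFrame({'a': [0, 0, 0], 'b': [1, 0, 2], 'c': [0, 0, 0]})
# (df != 0).any(axis=0) is True for columns with at least one non-zero entry
df_nonzero = df.loc[:, (df != 0).any(axis=0)]
print(df_nonzero)
#    b
# 0  1
# 1  0
# 2  2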
memory used by a dataframe
df.values.nbytes + df.index.nbytes + df.columns.nbytes
Sample usage:
In [1]: import pandas as pd ...: import numpy as np ...: df = pd.DataFrame(np.random.randn(5,3)) ...: df Out[1]: 0 1 2 0 -1.423967 0.798794 -1.144020 1 -1.667792 0.772769 1.161315 2 -1.430745 -0.573701 0.074876 3 -0.673812 0.534825 -0.934246 4 -1.773546 1.293830 1.113970 In [2]: df.values.nbytes Out[2]: 120 In [3]: df.index.nbytes Out[3]: 80 In [4]: df.columns.nbytes Out[4]: 80 In [5]: df.values.nbytes + df.index.nbytes + df.columns.nbytes Out[5]: 280
using to_timedelta function
>>> import pandas as pd >>> from datetime import datetime >>> dt = '20171103' >>> offset = [-4, 3, 1] >>> df = pd.DataFrame({'offset':offset}) >>> print(df) offset 0 -4 1 3 2 1 >>> >>> df['date'] = datetime.strptime(dt, '%Y%m%d') + \ ... pd.to_timedelta(df['offset'], 'w') >>> print(df) offset date 0 -4 2017-10-06 1 3 2017-11-24 2 1 2017-11-10
demonstrates | timedelta operations on a column
number of dates between two time series
>>> a[['end_date', 'start_date']].head() end_date start_date 0 2016-09-30 00:00:00.000 2008-02-14 00:00:00 1 2016-09-30 00:00:00.000 2015-01-23 00:00:00 2 2016-09-30 00:00:00.000 2014-09-29 00:00:00 3 2016-09-30 00:00:00.000 2014-09-29 00:00:00 4 2016-09-30 00:00:00.000 2010-09-14 00:00:00 >>> age = (pd.to_datetime(a['end_date']) - pd.to_datetime(a['start_date'])) >>> type(age) <class 'pandas.core.series.Series'> >>> age.head() 0 3151 days 1 616 days 2 732 days 3 732 days 4 2208 days dtype: timedelta64[ns]
To convert it to a number
>>> age = (pd.to_datetime(a['end_date']) - pd.to_datetime(a['start_date']))/np.timedelta64(1, 'D') >>> type(age) <class 'pandas.core.series.Series'> >>> age.head() 0 3151.0 1 616.0 2 732.0 3 732.0 4 2208.0 dtype: float64
count frequency of values in a column
tags | count categories, number of elements in each group, column frequency
>>> df2 Department Lottery Literacy Wealth Region 1 Aisne 38 51 22 N 2 Allier 66 13 61 C 3 Basses-Alpes 80 46 76 E 4 Hautes-Alpes 79 69 83 E 5 Ardeche 70 27 84 S 6 Ardennes 31 67 33 N 7 Ariege 75 18 72 S 8 Aube 28 59 14 E 9 Aude 50 34 17 S >>> df2['Region'].value_counts() S 3 E 3 N 2 C 1
Note:- value_counts() returns a pandas Series whose index holds the distinct values and whose values are the counts.
cumulative sum
>>> import pandas as pd >>> a = [2, -3, 4]; b = ['a', 'b', 'c']; c = [7, 4, 1] >>> df = pd.DataFrame({'a':a, 'b':b, 'c':c}) >>> df a b c 0 2 a 7 1 -3 b 4 2 4 c 1 >>> df['d'] = df['a'].cumsum() >>> df['e'] = df['b'].cumsum() >>> df['f'] = df['c'].cumsum() >>> df a b c d e f 0 2 a 7 2 a 7 1 -3 b 4 -1 ab 11 2 4 c 1 3 abc 12
quantile
tags | inverse cumulative distribution function
$ ipython In [1]: import pandas as pd In [2]: s = pd.Series([3, 1, 2, 4]) In [3]: s.quantile(.5) Out[3]: 2.5 In [4]: s.quantile([.25, .5, .75]) Out[4]: 0.25 1.75 0.50 2.50 0.75 3.25 dtype: float64
Ref:
- https://en.wikipedia.org/wiki/Quantile_function
- https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.quantile.html
- https://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.window.Rolling.quantile.html#pandas.core.window.Rolling.quantile
- https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.quantile.html#pandas.Series.quantile
add a sequence of numbers as a column to dataframe
In [49]: df Out[49]: y 0 169.91 1 265.32 2 158.53 3 160.87 4 167.45 5 158.23 6 165.52 7 155.62 In [50]: df['rownum'] = range(1, df.shape[0]+1) In [51]: df Out[51]: y rownum 0 169.91 1 1 265.32 2 2 158.53 3 3 160.87 4 4 167.45 5 5 158.23 6 6 165.52 7 7 155.62 8 In [52]: df.drop('rownum', axis=1, inplace=True) In [53]: df Out[53]: y 0 169.91 1 265.32 2 158.53 3 160.87 4 167.45 5 158.23 6 165.52 7 155.62
unsorted
- To print all column names in a data frame - df.columns.values
- Number of missing values in a dataframe - df.isnull().sum()
experiment with get_dummies
>>> import pandas as pd >>> df = pd.DataFrame({'A': ['a', 'b', 'a'], 'B': ['b', 'a', 'c'], 'C': [1, 2, 3]}) >>> df A B C 0 a b 1 1 b a 2 2 a c 3 >>> pd.get_dummies(df) C A_a A_b B_a B_b B_c 0 1 1.0 0.0 0.0 1.0 0.0 1 2 0.0 1.0 1.0 0.0 0.0 2 3 1.0 0.0 0.0 0.0 1.0 >>> pd.get_dummies(df, columns=['A']) B C A_a A_b 0 b 1 1.0 0.0 1 a 2 0.0 1.0 2 c 3 1.0 0.0 >>> pd.get_dummies(df, columns=['B']) A C B_a B_b B_c 0 a 1 0.0 1.0 0.0 1 b 2 1.0 0.0 0.0 2 a 3 0.0 0.0 1.0 >>> pd.get_dummies(df, columns=['A', 'B']) C A_a A_b B_a B_b B_c 0 1 1.0 0.0 0.0 1.0 0.0 1 2 0.0 1.0 1.0 0.0 0.0 2 3 1.0 0.0 0.0 0.0 1.0 >>> pd.get_dummies(df, columns=['B', 'A']) C B_a B_b B_c A_a A_b 0 1 0.0 1.0 0.0 1.0 0.0 1 2 1.0 0.0 0.0 0.0 1.0 2 3 0.0 0.0 1.0 1.0 0.0
Ref:- http://pandas.pydata.org/pandas-docs/version/0.18.1/generated/pandas.get_dummies.html
using at with multi index
df.at[(key1, key2), 'col1']
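A minimal runnable example (with a made-up frame) of getting and setting a single cell with .at on a MultiIndex:
import pandas as pd

df = pd.DataFrame(
    {'col1': [1, 2, 3]},
    index=pd.MultiIndex.from_tuples(
        [('a', 'x'), ('a', 'y'), ('b', 'x')], names=['key1', 'key2']))
print(df.at[('a', 'y'), 'col1'])   # 2
df.at[('b', 'x'), 'col1'] = 99     # set a single value in place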
hierarchical groupby
Consider the dataframe
d1 = pd.DataFrame( {'StudentID': ["x1", "x10", "x2","x3", "x4", "x5", "x6", "x7", "x8", "x9"], 'StudentGender' : ['F', 'M', 'F', 'M', 'F', 'M', 'F', 'M', 'M', 'M'], 'ExamenYear': ['2007','2007','2007','2008','2008','2008','2008','2009','2009','2009'], 'Exam': ['algebra', 'stats', 'bio', 'algebra', 'algebra', 'stats', 'stats', 'algebra', 'bio', 'bio'], 'Participated': ['no','yes','yes','yes','no','yes','yes','yes','yes','yes'], 'Passed': ['no','yes','yes','yes','no','yes','yes','yes','no','yes']}, columns = ['StudentID', 'StudentGender', 'ExamenYear', 'Exam', 'Participated', 'Passed']) print d1
StudentID StudentGender ExamenYear Exam Participated Passed 0 x1 F 2007 algebra no no 1 x10 M 2007 stats yes yes 2 x2 F 2007 bio yes yes 3 x3 M 2008 algebra yes yes 4 x4 F 2008 algebra no no 5 x5 M 2008 stats yes yes 6 x6 F 2008 stats yes yes 7 x7 M 2009 algebra yes yes 8 x8 M 2009 bio yes no 9 x9 M 2009 bio yes yes
and the function
def ZahlOccurence_0(x): return pd.Series({'All': len(x['StudentID']), 'Part': sum(x['Participated'] == 'yes'), 'Pass' : sum(x['Passed'] == 'yes')})
We can do groupby at multiple levels and add the results
t1 = d1.groupby(['ExamenYear', 'Exam']).apply(ZahlOccurence_0) t2 = d1.groupby('ExamenYear').apply(ZahlOccurence_0) print t1 print t2 t3 = pd.concat([t1.reset_index(), t2.reset_index()], ignore_index=True) print t3 t4 = t3.set_index(['ExamenYear', 'Exam']) print t4
All Part Pass ExamenYear Exam 2007 algebra 1 0 0 bio 1 1 1 stats 1 1 1 2008 algebra 2 1 1 stats 2 2 2 2009 algebra 1 1 1 bio 2 2 1 All Part Pass ExamenYear 2007 3 2 2 2008 4 3 3 2009 3 3 2 All Exam ExamenYear Part Pass 0 1 algebra 2007 0 0 1 1 bio 2007 1 1 2 1 stats 2007 1 1 3 2 algebra 2008 1 1 4 2 stats 2008 2 2 5 1 algebra 2009 1 1 6 2 bio 2009 2 1 7 3 NaN 2007 2 2 8 4 NaN 2008 3 3 9 3 NaN 2009 3 2 All Part Pass ExamenYear Exam 2007 algebra 1 0 0 bio 1 1 1 stats 1 1 1 2008 algebra 2 1 1 stats 2 2 2 2009 algebra 1 1 1 bio 2 2 1 2007 NaN 3 2 2 2008 NaN 4 3 3 2009 NaN 3 3 2
When aggregating over all Exams for a given year, we can show meaningful text instead of NaN.
t1 = d1.groupby(['ExamenYear', 'Exam']).apply(ZahlOccurence_0) t2 = d1.groupby('ExamenYear').apply(ZahlOccurence_0).assign(Exam='All').reset_index().set_index(['ExamenYear', 'Exam']) print t1 print t2 t3 = pd.concat([t1.reset_index(), t2.reset_index()], ignore_index=True) print t3 t4 = t3.set_index(['ExamenYear', 'Exam']) print t4
All Part Pass ExamenYear Exam 2007 algebra 1 0 0 bio 1 1 1 stats 1 1 1 2008 algebra 2 1 1 stats 2 2 2 2009 algebra 1 1 1 bio 2 2 1 All Part Pass ExamenYear Exam 2007 All 3 2 2 2008 All 4 3 3 2009 All 3 3 2 ExamenYear Exam All Part Pass 0 2007 algebra 1 0 0 1 2007 bio 1 1 1 2 2007 stats 1 1 1 3 2008 algebra 2 1 1 4 2008 stats 2 2 2 5 2009 algebra 1 1 1 6 2009 bio 2 2 1 7 2007 All 3 2 2 8 2008 All 4 3 3 9 2009 All 3 3 2 All Part Pass ExamenYear Exam 2007 algebra 1 0 0 bio 1 1 1 stats 1 1 1 2008 algebra 2 1 1 stats 2 2 2 2009 algebra 1 1 1 bio 2 2 1 2007 All 3 2 2 2008 All 4 3 3 2009 All 3 3 2
To make the report hierarchical, we can assemble it by inserting the "All" Exam row after each year's group instead of at the end.
t1 = d1.groupby(['ExamenYear', 'Exam']).apply(ZahlOccurence_0) t2 = d1.groupby('ExamenYear').apply(ZahlOccurence_0).assign(Exam='All').reset_index().set_index(['ExamenYear', 'Exam']) print t1 print t2 t1_group = t1.groupby(level=0) t2_group = t2.groupby(level=0) a=[] for (i,j) in t1_group: a.append(t1_group.get_group(i).reset_index()) a.append(t2_group.get_group(i).reset_index()) t3 = pd.concat(a, ignore_index=True).set_index(['ExamenYear', 'Exam']) print t3
All Part Pass ExamenYear Exam 2007 algebra 1 0 0 bio 1 1 1 stats 1 1 1 2008 algebra 2 1 1 stats 2 2 2 2009 algebra 1 1 1 bio 2 2 1 All Part Pass ExamenYear Exam 2007 All 3 2 2 2008 All 4 3 3 2009 All 3 3 2 All Part Pass ExamenYear Exam 2007 algebra 1 0 0 bio 1 1 1 stats 1 1 1 All 3 2 2 2008 algebra 2 1 1 stats 2 2 2 All 4 3 3 2009 algebra 1 1 1 bio 2 2 1 All 3 3 2
tags | join dataframes with different levels of indices, concat dataframes with different index levels, append data at the end of each group, concat dataframe at each group level, multiindex iterate on groups, method chaining assign variable name
sampling on a groupby object
In [1]: import pandas as pd ...: df = pd.DataFrame({'a': [1,2,3,4,5,6,7,8,9], 'b': [1,1,1,0,0,0,0,2,2]}) ...: print(df) a b 0 1 1 1 2 1 2 3 1 3 4 0 4 5 0 5 6 0 6 7 0 7 8 2 8 9 2 In [2]: grouped = df.groupby('b') ...: smp = grouped.apply(lambda x: x.sample(2)).reset_index(drop=True) ...: print(smp) a b 0 6 0 1 4 0 2 1 1 3 2 1 4 9 2 5 8 2
Ref:-
- https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sample.html
- https://stackoverflow.com/questions/36390406/pandas-sample-each-group-after-groupby
Note:- Below is the output without resetting the index
In [2]: grouped = df.groupby('b') ...: df2 = grouped.apply(lambda x: x.sample(2)) ...: print(df2) a b b 0 4 5 0 6 7 0 1 0 1 1 2 3 1
To include only those groups that have a minimum number of elements, use
In [8]: min_count = 3 ...: grouped = df.groupby('b') ...: smp = grouped.apply(lambda x: x.sample(2) if len(x) >= min_count else None).reset_index(drop=True) ...: print(smp) a b 0 5 0 1 4 0 2 1 1 3 2 1
call function on each group
def apply_per_group(df):
    grouped = df.groupby('column_foo')
    frames = []
    for key, df_key in grouped:
        # func_bar stands for whatever per-group processing is needed
        new_df_key = df_key.func_bar()
        frames.append(new_df_key)
    if not frames:
        new_df = pd.DataFrame(None)
    else:
        new_df = pd.concat(frames)
    return new_df
tags | groupby call function
Ref:-
- https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html
- http://pandas.pydata.org/pandas-docs/stable/groupby.html
number of groups in a pandas groupby object
groups = df.groupby('foo')
ngroups = len(groups)
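The GroupBy object also exposes an ngroups attribute, which gives the same number; a minimal example with a made-up frame:
import pandas as pd

df = pd.DataFrame({'foo': ['a', 'a', 'b', 'c']})
grouped = df.groupby('foo')
print(len(grouped))     # 3
print(grouped.ngroups)  # 3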
extract a group from a groupby object by key
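Use get_group() with the group key; a minimal example with a made-up frame:
import pandas as pd

df = pd.DataFrame({'a': [1, 2, 3, 4], 'b': ['x', 'y', 'x', 'y']})
grouped = df.groupby('b')
print(grouped.get_group('x'))
#    a  b
# 0  1  x
# 2  3  x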
preserve formatting of columns
Set dtype to object to preserve the formatting of the columns. This is useful if we want to dump the data back out after adding or removing certain columns.
df = pd.read_csv(fname, dtype=object)
deprecated
- DataFrame.sort is deprecated. Use sort_values instead.
myscript.py:57: FutureWarning: sort(columns=....) is deprecated, use sort_values(by=.....) na_position='last')
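For reference, a minimal sort_values call that replaces the deprecated sort(columns=...) usage; the frame and column names here are made up:
import pandas as pd

df = pd.DataFrame({'foo': [3, None, 1], 'bar': ['a', 'b', 'c']})
df = df.sort_values(by='foo', na_position='last')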
SettingWithCopyWarning
Consider the following code
dff = df[['foo', 'bar', 'baz']]
dff['qux'] = df['qux'] if 'qux' in df else None
It throws a SettingWithCopyWarning saying
A value is trying to be set on a copy of a slice from a DataFrame. Try using .loc[row_indexer,col_indexer] = value instead See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy dff['qux'] = df['qux'] if 'qux' in df else None
To fix it
dff = df.filter(['foo', 'bar', 'baz'])
dff['qux'] = df['qux'] if 'qux' in df else None
read a sheet in excel file
df = pd.read_excel('file.xlsx', 'sheet_name', na_values=['-', 'N/A', 'NA'])
unmerge cells when writing a dataframe
df = pd.read_csv('C:/Users/raju/x/foo.csv')
# Set index on the first two columns
df.set_index(list(df)[:2], inplace=True)
# By default, to_excel will write MultiIndex and Hierarchical Rows
# as merged cells. Use merge_cells=False to disable this behaviour.
df.to_excel('C:/Users/raju/x/foo.xlsx', sheet_name='myfoo',
            startrow=1, startcol=1, merge_cells=False)
Ref:- https://pandas.pydata.org/pandas-docs/version/0.23/generated/pandas.DataFrame.to_excel.html
build a dataframe with unique values from multiple columns
Select the columns of interest and call drop_duplicates() on it.
import pandas as pd
import numpy as np

df = pd.DataFrame({'A': ['foo', 'bar', 'foo', 'bar', 'foo', 'bar', 'foo', 'foo'],
                   'B': ['one', 'one', 'two', 'three', 'two', 'two', 'one', 'three'],
                   'C': np.random.randn(8),
                   'D': np.random.randn(8)})
print(df)
A B C D 0 foo one 1.200722 -0.171384 1 bar one -0.662782 -0.226719 2 foo two 0.790387 1.091735 3 bar three 0.615051 -2.474762 4 foo two 0.128955 -0.519028 5 bar two -0.990671 -1.010521 6 foo one 0.299682 -0.220049 7 foo three -0.140584 -1.405962
uniq = df[['A', 'B']].drop_duplicates()
print(uniq)
A B 0 foo one 1 bar one 2 foo two 3 bar three 5 bar two 7 foo three
treat zero divided by zero as zero
tags | handle 0 by 0
In [1]: import pandas as pd import numpy as np In [6]: df = pd.DataFrame({'s1': [1.1, 0.5, 0, 0, 4.2, np.nan, np.nan], 's2': [2.2, 0, 0.7, 0, np.nan, 5.6, np.nan]}) print(df) Out [6]: s1 s2 0 1.1 2.2 1 0.5 0.0 2 0.0 0.7 3 0.0 0.0 4 4.2 NaN 5 NaN 5.6 6 NaN NaN In [39]: s1 = df['s1']; s2 = df['s2'] s3 = (s2.fillna(0)/s1.fillna(0) -1) * 100 mask_zero_by_zero = (s1.fillna(0) == 0) & (s2.fillna(0) == 0) s4 = (s2.fillna(0)/s1.fillna(0) -1) * 100 s4[mask_zero_by_zero] = 0.0 df2 = pd.concat((df, pd.DataFrame({'s3':s3, 'mask_zero_by_zero': mask_zero_by_zero, 's4':s4})), axis=1) print df2 Out [39]: s1 s2 mask_zero_by_zero s3 s4 0 1.1 2.2 False 100.000000 100.000000 1 0.5 0.0 False -100.000000 -100.000000 2 0.0 0.7 False inf inf 3 0.0 0.0 True NaN 0.000000 4 4.2 NaN False -100.000000 -100.000000 5 NaN 5.6 False inf inf 6 NaN NaN True NaN 0.000000
enter multiple lines in ipython
Type ctrl+q then Enter
find columns that contain a string
In [1]: import pandas as pd ...: ...: data = {'spike-2': [1,2,3], 'hey spke': [4,5,6], 'spiked-in': [7,8,9], 'no': [10,11,12]} ...: df = pd.DataFrame(data) ...: print(df) ...: ...: spike_cols = [col for col in df.columns if 'spike' in col] ...: print(list(df.columns)) ...: print(spike_cols) ...: spike-2 hey spke spiked-in no 0 1 4 7 10 1 2 5 8 11 2 3 6 9 12 ['spike-2', 'hey spke', 'spiked-in', 'no'] ['spike-2', 'spiked-in']
reindex from 0 to N
Add a column from 1 to N
df['row'] = np.arange(1, len(df)+1)
df['row'] will have an integer dtype (typically int64)
Ref:-
- https://stackoverflow.com/questions/32249960/in-python-pandas-start-row-index-from-1-instead-of-zero-without-creating-additi
- https://docs.scipy.org/doc/numpy/reference/generated/numpy.arange.html
tags | reset_index and add 1
reorder columns
df = df[col]
Sample usage:
$ ipython In [1]: data = [{'symbol': 'UIS', 'sharesOutstanding': 51013181}, {'symbol': 'AAPL', 'sharesOutstanding': 4829926000}] In [2]: import pandas as pd In [3]: df = pd.DataFrame(data) In [4]: df Out[4]: sharesOutstanding symbol 0 51013181 UIS 1 4829926000 AAPL In [5]: new_order = ['symbol', 'sharesOutstanding'] In [6]: df = df[new_order] In [7]: df Out[7]: symbol sharesOutstanding 0 UIS 51013181 1 AAPL 4829926000
using isin
filter based on index and list of values
df[df.index.isin( list_foo )]
tags | using isin on index
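A minimal example with a made-up frame and list:
import pandas as pd

df = pd.DataFrame({'x': [10, 20, 30]}, index=['a', 'b', 'c'])
list_foo = ['a', 'c']
print(df[df.index.isin(list_foo)])
#     x
# a  10
# c  30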
OSError: [Errno 22] Invalid argument: '<stdout>'
The full traceback was
$ python sp500_tickers.py Traceback (most recent call last): File "sp500_tickers.py", line 60, in <module> tickers.to_csv(sys.stdout, header=False) File "C:\ProgramData\Continuum\Anaconda\envs\market_data_processor\lib\site-packages\pandas\core\frame.py", line 1745, in to_csv formatter.save() File "C:\ProgramData\Continuum\Anaconda\envs\market_data_processor\lib\site-packages\pandas\io\formats\csvs.py", line 165, in save compression=self.compression) File "C:\ProgramData\Continuum\Anaconda\envs\market_data_processor\lib\site-packages\pandas\io\common.py", line 400, in _get_handle f = open(path_or_buf, mode, encoding=encoding) OSError: [Errno 22] Invalid argument: '<stdout>'
To fix it, I upgraded pandas from 0.23.1-py36h830ac7b_0 to 0.23.4-py36h830ac7b_0.
size of dataframe in bytes
import sys
sys.getsizeof(df)
As described in http://raju.shoutwiki.com/wiki/Python_notes#getsizeof_pitfalls , this will not give a "deep" size. If an element in a dataframe is a reference, it will not be included in the size calculation.
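For a measurement that does account for the memory behind object (e.g. string) elements, pandas provides DataFrame.memory_usage(deep=True); a small sketch with a made-up frame:
import pandas as pd

df = pd.DataFrame({'a': [1, 2, 3], 'b': ['x', 'yy', 'zzz']})
print(df.memory_usage(deep=True))        # per-column usage, index included
print(df.memory_usage(deep=True).sum())  # total bytes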
location of pandas
tags | pandas file, module path
import pandas as pd
pd.__file__
count number of unique values in a column
Related:
- df['foo'].nunique() - count distinct values
- df['foo'].count() - count non-null values
- df['foo'].size - count all including null values
In [1]: import pandas as pd ...: df = pd.DataFrame({'A': [2, 3, 5, 7], 'B': [3, None, 3, 2]}) ...: df Out[1]: A B 0 2 3.0 1 3 NaN 2 5 3.0 3 7 2.0 In [2]: df['B'].nunique() Out[2]: 2 In [3]: df['B'].count() Out[3]: 3 In [4]: df['B'].size Out[4]: 4
cross tables
Sample notebook - github.com/KamarajuKusumanchi
tags | convert values in csv data to column names of a dataframe
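Since the sample notebook is not reproduced here, below is a minimal pd.crosstab() sketch with made-up data, showing how the values of one column become the column names of the result:
import pandas as pd

df = pd.DataFrame({'region': ['N', 'N', 'S', 'S', 'S'],
                   'item':   ['A', 'B', 'A', 'A', 'B']})
# Count of each item per region; the distinct item values become columns.
ct = pd.crosstab(df['region'], df['item'])
print(ct)
# item    A  B
# region
# N       1  1
# S       2  1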
expand dataframe into three columns
- pd.melt(); a minimal sketch follows the links below
search tags | opposite of crosstab, unpivot, opposite of pivot_table, tabulate a dataframe, print dataframe one element per row, "print all elements in a dataframe with index, column name, value", row col value
Related links:- https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.crosstab.html
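A minimal melt sketch with made-up data, going from a wide crosstab-like frame to one row per (region, item, count) triple:
import pandas as pd

wide = pd.DataFrame({'region': ['N', 'S'], 'A': [1, 2], 'B': [1, 1]})
# Unpivot: id_vars stays as-is, the remaining column names go into 'item'
# and their values go into 'count'.
long = wide.melt(id_vars='region', var_name='item', value_name='count')
print(long)
#   region item  count
# 0      N    A      1
# 1      S    A      2
# 2      N    B      1
# 3      S    B      1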
Coordinates of an element in a csv file
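Assuming this refers to finding the (row, column) positions of a given value in a csv file, here is a hedged sketch; the file name foo.csv and the target value 42 are placeholders:
import numpy as np
import pandas as pd

df = pd.read_csv('foo.csv', header=None)  # hypothetical file
rows, cols = np.where(df.values == 42)    # 42 is a placeholder value
coords = list(zip(rows, cols))            # 0-based (row, column) pairs
print(coords)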
Cheat sheets
- https://www.webpages.uidaho.edu/~stevel/504/Pandas%20DataFrame%20Notes.pdf - easy to read, compact, covers a lot of stuff.
External links
- http://pandas.pydata.org/pandas-docs/stable/groupby.html - shows "group by" functionalities in pandas. Very well written with examples and stuff. Worth reading from top to bottom.
- http://bconnelly.net/2013/10/summarizing-data-in-python-with-pandas/ - Shows how to use pandas.groupby() to summarize data.
- https://stackoverflow.com/a/32801170/6305733 - shows how to count the number of rows in each group of a groupby object
- https://tomaugspurger.github.io/method-chaining.html - describes the pipe() functionality using which a dataframe can be passed to another function using a chain like syntax.
- The Python -> "Data Wrangling" section in http://chrisalbon.com/ is pretty useful. It contains solutions to real world problems you come across when working with data.
- http://pandas.pydata.org/pandas-docs/stable/10min.html - useful reference. Short and concise.
Tasks
Frequent stuff
Common use cases involving DataFrames
For a complete list, see http://pandas.pydata.org/pandas-docs/stable/api.html#index
Use case | Solution | See also |
---|---|---|
Get the number of rows and columns | df.shape | DataFrame.shape |
Select rows when columns contain certain values (tags: not in) | | |
Get N distinct values | df['name'].unique()[:N] | Series.unique |
Get all distinct values | df['name'].unique() | Series.unique |
Limit dataframe to N distinct values of a column | define limit_distinct(df, col, N) as v = df[col].unique()[:N]; return df[df[col].isin(v)], then call df.pipe(limit_distinct, 'name', N) | |
summary stats of a column | df['foo'].func() where func is something like mean, sum, std, median, min, max | |
Set a string value to missing | df['foo'].replace('bar', None) | |
select first 10 rows | df[:10] | |
API of frequently used dataframe functions
tags | documentation links
pd.concat()
pandas 0.23.0 added a sort keyword to pd.concat(), with None as the default. Starting with pandas 1.0.0, the default changed to False. Prior to 0.23.0, pd.concat() behaved as if sort=True were used.
- https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.concat.html
- https://pandas.pydata.org/pandas-docs/version/0.25.3/reference/api/pandas.concat.html
- https://pandas.pydata.org/pandas-docs/version/0.19.2/generated/pandas.concat.html
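A minimal sketch (made-up frames) of passing sort explicitly, so the column order of the result does not depend on the pandas version and no FutureWarning is emitted on 0.23.x:
import pandas as pd

df1 = pd.DataFrame({'b': [1], 'a': [2]})
df2 = pd.DataFrame({'a': [3], 'c': [4]})
# sort=False keeps the non-concatenation axis (the columns here) unsorted
out = pd.concat([df1, df2], sort=False)
print(out)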
quick ref
pandas.DataFrame.drop
- By default, it operates out of place and returns a new DataFrame instead of modifying the original.
- To drop columns,
.drop(labels=['foo', 'bar'], axis=1)
Ref:-
pandas.DataFrame.rename
- By default, it operates out of place and returns a new DataFrame instead of modifying the original.
- To rename columns
.rename(columns={'foo1': 'bar1', 'foo2': 'bar2'})
Ref:-