Manipulating dataframes in Python


Create a dataframe

Create a data frame from two lists

On the interpreter

    >>> import pandas as pd
    >>> df = pd.DataFrame({'letters': ['a', 'b', 'c', 'd'], 'numbers': [1,2,3,4]})
    >>> df
      letters  numbers
    0       a        1
    1       b        2
    2       c        3
    3       d        4
    

In a script

    % cat create_df_from_lists.py
    import pandas as pd
    letters = ['a', 'b', 'c']
    words = ['apple', 'ball', 'cat']
    df = pd.DataFrame(
        {'letter': letters,
         'word': words})
    
    print("letters:\n", letters, "\n")
    print("words:\n", words, "\n")
    print("df:\n", df, "\n")
    

Sample run:

     % python3 -u create_df_from_lists.py
    letters:
     ['a', 'b', 'c'] 
    
    words:
     ['apple', 'ball', 'cat'] 
    
    df:
       letter   word
    0      a  apple
    1      b   ball
    2      c    cat
    

Create a dataframe from a list of dictionaries

    >>> import pandas as pd
    >>> d = [{'points': 50, 'time': '5:00', 'year': 2010}, 
    ... {'points': 25, 'time': '6:00', 'month': "february"}, 
    ... {'points':90, 'time': '9:00', 'month': 'january'}, 
    ... {'points_h1':20, 'month': 'june'}]
    >>> df = pd.DataFrame(d)
    >>> print(df)
          month  points  points_h1  time    year
    0       NaN    50.0        NaN  5:00  2010.0
    1  february    25.0        NaN  6:00     NaN
    2   january    90.0        NaN  9:00     NaN
    3      june     NaN       20.0   NaN     NaN
    

Ref: http://stackoverflow.com/questions/20638006/convert-list-of-dictionaries-to-dataframe

Create an empty data frame

    >>> import pandas as pd
    >>> a = pd.DataFrame(None)
    >>> a
    Empty DataFrame
    Columns: []
    Index: []
    >>> type(a)
    <class 'pandas.core.frame.DataFrame'>
    

To check if the dataframe is empty

    >>> a.empty
    True
    

Create an empty dataframe with column names

    >>> import pandas as pd
    >>> a = pd.DataFrame(columns=['x', 'y', 'z'])
    >>> a
    Empty DataFrame
    Columns: [x, y, z]
    Index: []
    >>> type(a)
    <class 'pandas.core.frame.DataFrame'>
    

Check that it is an empty dataframe

    >>> a.empty
    True
    

Create a new dataframe with fewer columns

To select the foo and bar columns from the all_df dataframe and create a new dataframe called df

    df = all_df[['foo', 'bar']].copy()
    

The explicit .copy() makes df its own dataframe instead of a view of all_df, which avoids SettingWithCopyWarning when df is modified later.


print either to file or to stdout

    df.to_csv(out_file if out_file else sys.stdout,
              index=False)
    

print all values in a pandas series
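
By default pandas truncates long series when printing. A minimal sketch, assuming any series s, that lifts the limit temporarily with pd.option_context:

    >>> import pandas as pd
    >>> s = pd.Series(range(1000))
    >>> with pd.option_context('display.max_rows', None):
    ...     print(s)
    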

convert int64 YYYYMMDD to datetime64

    df['date'] = pd.to_datetime(df['date'], format='%Y%m%d')
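
For example (made-up data):

    >>> import pandas as pd
    >>> df = pd.DataFrame({'date': [20160201, 20160104]})
    >>> df['date'].dtype
    dtype('int64')
    >>> pd.to_datetime(df['date'], format='%Y%m%d')
    0   2016-02-01
    1   2016-01-04
    Name: date, dtype: datetime64[ns]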
    

extract first 8 characters of a column in a dataframe

    >>> a['Date']
    0    20160201.0
    1    20160201.0
    2    20160201.0
    3    20160104.0
    4    20160104.0
    5    20160104.0
    6    20161201.0
    7    20161201.0
    8    20161201.0
    Name: Date, dtype: object
    >>> a['Date'].str[:8]
    0    20160201
    1    20160201
    2    20160201
    3    20160104
    4    20160104
    5    20160104
    6    20161201
    7    20161201
    8    20161201
    Name: Date, dtype: object
    

Note that the .str accessor works here because the column dtype is object (the values are strings). For a numeric column, convert first, e.g. a['Date'].astype(str).str[:8].

Ref:- http://stackoverflow.com/questions/20970279/how-to-do-a-left-right-and-mid-of-a-string-in-a-pandas-dataframe

Iterate over each month

    import pandas as pd
    
    from pandas.tseries.offsets import *
    for end_dt in pd.date_range('20160110', '20160920', freq='M'):
        begin_dt = end_dt + MonthBegin(n=-1)
        end_dt_yyyymmdd = end_dt.strftime('%Y%m%d')
        begin_dt_yyyymmdd = begin_dt.strftime('%Y%m%d')
        print(begin_dt_yyyymmdd, end_dt_yyyymmdd)
    

will produce

    20160101 20160131
    20160201 20160229
    20160301 20160331
    20160401 20160430
    20160501 20160531
    20160601 20160630
    20160701 20160731
    20160801 20160831
    

Using

    pd.date_range('20160110', '20160930', freq='M')
    

in the loop above will produce

    20160101 20160131
    20160201 20160229
    20160301 20160331
    20160401 20160430
    20160501 20160531
    20160601 20160630
    20160701 20160731
    20160801 20160831
    20160901 20160930
    

Iterate over each quarter

    import pandas as pd
    
    from pandas.tseries.offsets import *
    for end_dt in pd.date_range('20140101', '20160930', freq='Q')[::-1]:
        begin_dt = end_dt + MonthBegin(n=-3)
        end_dt_yyyymmdd = end_dt.strftime('%Y%m%d')
        begin_dt_yyyymmdd = begin_dt.strftime('%Y%m%d')
        print(begin_dt_yyyymmdd, end_dt_yyyymmdd)
    

will produce

    20160701 20160930
    20160401 20160630
    20160101 20160331
    20151001 20151231
    20150701 20150930
    20150401 20150630
    20150101 20150331
    20141001 20141231
    20140701 20140930
    20140401 20140630
    20140101 20140331
    

Conditional assignment

    >>> import pandas as pd
    >>> df = pd.DataFrame({'letters': ['a', 'b', 'c', 'd'], 'numbers': [1,2,3,4]})
    >>> df
      letters  numbers
    0       a        1
    1       b        2
    2       c        3
    3       d        4
    >>> df['new'] = 'default'
    >>> df
      letters  numbers      new
    0       a        1  default
    1       b        2  default
    2       c        3  default
    3       d        4  default
    >>> df.loc[df['numbers'] > 2.5, 'new'] = 'b+'
    >>> df
      letters  numbers      new
    0       a        1  default
    1       b        2  default
    2       c        3       b+
    3       d        4       b+
    

filter rows by conditions

Capture rows by conditioning on two columns

    mask_foo = (df['foo'] == 'FOO') & (df['bar'] >= 100)
    mask_sec = (df['foo'] == 'SEC') & (df['bar'] >= 500)
    df2 = df[ (mask_foo | mask_sec) ]
    

Remove null entries and everything less than 500 in column foo

    mask = (pd.isnull(df['foo'])) | \
           (df['foo'] < 500)
    df2 = df[~mask]
    

tags | logical

Joining two dataframes in pandas

To merge data frames a and b on column 'foo' and store the result in a new data frame, m

    import pandas as pd
    ...
    m = pd.merge(a, b, on='foo')
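
By default pd.merge does an inner join on the given column; pass how='left', how='right', or how='outer' for the other join types.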
    


Inner joining two dataframes

Note that when two dataframes are inner joined, the resulting dataframe can be larger than both inputs. This happens when either dataframe contains multiple rows with the same value in the join columns. For example, consider

    >>> import pandas as pd
    >>> df1 = pd.DataFrame([[1, 3], [1, 4]], columns=['A', 'B'])
    >>> df1
       A  B
    0  1  3
    1  1  4
    >>> df2 = pd.DataFrame([[1, 5], [1, 6]], columns=['A', 'C'])
    >>> df2
       A  C
    0  1  5
    1  1  6
    >>> df3 = pd.merge(df1, df2, on='A', how='inner')
    >>> df3
       A  B  C
    0  1  3  5
    1  1  3  6
    2  1  4  5
    3  1  4  6
    

which shows 4 rows in df3 even though it was created by inner joining two data frames that each have 2 rows.

If the duplicates are not expected, try cleaning the data with DataFrame.drop_duplicates() before merging (see the "drop duplicates" section below).

Ref:- http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.merge.html

Dump dataframe to a gzip file

https://github.com/KamarajuKusumanchi/sampleusage/blob/master/python/pandas/df_to_gzip.py
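
A minimal sketch (the file name is arbitrary): DataFrame.to_csv can write gzip directly.

    import pandas as pd
    df = pd.DataFrame({'a': [1, 2], 'b': [3, 4]})
    # compression is inferred from the .gz suffix;
    # compression='gzip' makes it explicit
    df.to_csv('df.csv.gz', index=False, compression='gzip')
    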

get duplicates

     % python3
    Python 3.5.3rc1 (default, Jan  3 2017, 04:40:57) 
    [GCC 6.3.0 20161229] on linux
    
    >>> import pandas as pd
    >>> a = pd.DataFrame({'isp': ['comcast', 'telmex', 'comcast'], 'country' : ['us', 'mexico', 'us']})
    >>> a
      country      isp
    0      us  comcast
    1  mexico   telmex
    2      us  comcast
    
    >>> a[a.duplicated()]
      country      isp
    2      us  comcast
    

Ref:- http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.duplicated.html


drop duplicates

    >>> import pandas as pd
    >>> df = pd.DataFrame([[1,2], [3,4], [5, 6], [7, 8], [5, 4]], columns=list('AB'))
    >>> df
       A  B
    0  1  2
    1  3  4
    2  5  6
    3  7  8
    4  5  4
    >>> df.drop_duplicates()
       A  B
    0  1  2
    1  3  4
    2  5  6
    3  7  8
    4  5  4
    >>> df.drop_duplicates(subset=["B"])
       A  B
    0  1  2
    1  3  4
    2  5  6
    3  7  8
    >>> df.drop_duplicates(subset=["B"], keep='last')
       A  B
    0  1  2
    2  5  6
    3  7  8
    4  5  4
    >>> df.drop_duplicates(subset=["B"], keep=False)
       A  B
    0  1  2
    2  5  6
    3  7  8
    

Ref:- http://pandas.pydata.org/pandas-docs/version/0.17.1/generated/pandas.DataFrame.drop_duplicates.html

drop duplicate columns with different column names

     % python3
    Python 3.5.3 (default, Jan 19 2017, 14:11:04) 
    [GCC 6.3.0 20170118] on linux
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import pandas as pd
    >>> df = pd.DataFrame({'a': [1,2,3], 'b': [4,5,6], 'c': [1,2,3]})
    >>> df
       a  b  c
    0  1  4  1
    1  2  5  2
    2  3  6  3
    >>> df2 = df.T.drop_duplicates().T
    >>> df2
       a  b
    0  1  4
    1  2  5
    2  3  6
    

Note:- The following does not work since the duplicate columns do not have the same name.

    >>> df3 = df.loc[:,~df.columns.duplicated()]
    >>> df3
       a  b  c
    0  1  4  1
    1  2  5  2
    2  3  6  3
    

Select rows with multiple constraints

    df[(df['foo'] >= FOO) & (df['bar'] <= BAR)]
    

where foo and bar are columns in dataframe df, and FOO and BAR are thresholds.


delete columns in a dataframe

To delete one column

    df = df.drop('column_name', axis=1)
    

where axis is 0 for rows and 1 for columns; the default is axis=0. Recent versions of pandas require axis to be passed as a keyword rather than positionally.

To delete a column in place

    df.drop('column_name', axis=1, inplace=True)
    

To delete a column by number, e.g. the 1st, 2nd and 4th columns:

    df.drop(df.columns[[0, 1, 3]], axis=1)  # df.columns is zero-based pd.Index
    


read dataframe from stdin

tags | read_csv stdin

     % ls
    data.csv  read_df_from_stdin.py
    
     % cat data.csv
    x,a,b
    10,2,2
    2,100,4
    8,5,3
    
     % cat read_df_from_stdin.py 
    #!/usr/bin/env python3
    
    import sys
    import pandas as pd
    
    df = pd.read_csv(sys.stdin)
    print(df)
    
    print(df.describe())
    
     % chmod +x read_df_from_stdin.py
    
     % cat data.csv | ./read_df_from_stdin.py 
        x    a  b
    0  10    2  2
    1   2  100  4
    2   8    5  3
                   x           a    b
    count   3.000000    3.000000  3.0
    mean    6.666667   35.666667  3.0
    std     4.163332   55.734490  1.0
    min     2.000000    2.000000  2.0
    25%     5.000000    3.500000  2.5
    50%     8.000000    5.000000  3.0
    75%     9.000000   52.500000  3.5
    max    10.000000  100.000000  4.0
    


mean response when predictor is nonzero

Consider

     % cat train.csv 
    y,X0,X1,X2
    4.8,a,0,1
    8.8,a,1,1
    7.6,b,0,1
    8.1,b,1,1
    7.8,b,0,0
    9.3,c,1,0
    

where y is the response variable and X0, X1 and X2 are predictors. X1 and X2 are binary predictors (they can be either 0 or 1), while X0 is a categorical variable that takes the values a, b, c. The idea is to find the mean of the response variable when a predictor is true; for X0, we find the mean for each category.

Sample code

    import pandas as pd
    df_raw = pd.read_csv("train.csv")
    print(df_raw)
    df = pd.get_dummies(df_raw)
    print(df)
    
    ycol = 'y'
    xcols = ['X1', 'X2', 'X0_a', 'X0_b', 'X0_c']
    response = pd.DataFrame(columns=xcols, index=['mean', 'std', 'score'])
    for xcol in xcols:
        mean_value = df.loc[df[xcol] == 1, ycol].mean()
        std_value = df.loc[df[xcol] == 1, ycol].std()
        score = mean_value/std_value
        response.loc['mean', xcol] = mean_value
        response.loc['std', xcol] = std_value
        response.loc['score', xcol] = score
    
    print(response)
    

Sample output

         y X0  X1  X2
    0  4.8  a   0   1
    1  8.8  a   1   1
    2  7.6  b   0   1
    3  8.1  b   1   1
    4  7.8  b   0   0
    5  9.3  c   1   0
         y  X1  X2  X0_a  X0_b  X0_c
    0  4.8   0   1     1     0     0
    1  8.8   1   1     1     0     0
    2  7.6   0   1     0     1     0
    3  8.1   1   1     0     1     0
    4  7.8   0   0     0     1     0
    5  9.3   1   0     0     0     1
                 X1       X2     X0_a      X0_b X0_c
    mean    8.73333    7.325      6.8   7.83333  9.3
    std    0.602771  1.75381  2.82843  0.251661  NaN
    score   14.4886  4.17663  2.40416   31.1265  NaN
    

remove blank columns in a dataframe

To remove columns where all values are missing (here, after first restricting to the rows where COL_FOO equals 'bar')

    import pandas as pd
    in_file = 'input.csv'
    out_file = 'output.csv'
    df = pd.read_csv(in_file, dtype=object)
    mask = (df['COL_FOO'] == 'bar')
    df_small = df[mask].dropna(axis=1, how='all')
    df_small.to_csv(out_file, sep=',', index=False)
    

Combine two dataframes by appending columns

    df_all = pd.concat([df1, df2], axis=1)
    

Ref:- "Concatenating objects" section in https://pandas.pydata.org/pandas-docs/stable/merging.html
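
Note that with axis=1, pd.concat aligns rows on the index; if df1 and df2 have different indexes, the result contains NaN where a row exists in only one of them.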

Select columns based on dtype

Use DataFrame.select_dtypes() to decompose data based on its type. For example

    def decompose_data(df):
        d = {}
        # string dtype names avoid the np.float / np.int / np.object
        # aliases, which were removed from numpy
        d['float'] = df.select_dtypes(include=['float'])
        d['int'] = df.select_dtypes(include=['int'])
        d['object'] = df.select_dtypes(include=['object'])
        return d
    


tags | get all integer columns in a dataframe

Remove columns that are all zero
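
A minimal sketch, assuming a numeric dataframe: keep only the columns that have at least one nonzero value.

    >>> import pandas as pd
    >>> df = pd.DataFrame({'a': [1, 2], 'b': [0, 0], 'c': [0, 3]})
    >>> df.loc[:, (df != 0).any(axis=0)]
       a  c
    0  1  0
    1  2  3
    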

Append two dataframes

    >>> import pandas as pd
    >>> df = pd.DataFrame([[1, 2], [3, 4]], columns=list('AB'))
    >>> df
       A  B
    0  1  2
    1  3  4
    >>> df2 = pd.DataFrame([[5, 6], [7, 8]], columns=list('AB'))
    >>> df2
       A  B
    0  5  6
    1  7  8
    >>> df_merge = df.append(df2, ignore_index=True)
    >>> df_merge
       A  B
    0  1  2
    1  3  4
    2  5  6
    3  7  8
    

This does not modify df or df2. Note that DataFrame.append was deprecated in pandas 1.4 and removed in pandas 2.0; on newer versions use pd.concat(), shown at the end of this section.

By default, duplicates are not eliminated. Use drop_duplicates() for that.

    >>> df3 = pd.DataFrame([[5, 6], [7, 8], [3, 4]], columns=list('AB'))
    >>> df3
       A  B
    0  5  6
    1  7  8
    2  3  4
    >>> df_merge = df.append(df3, ignore_index=True)
    >>> df_merge
       A  B
    0  1  2
    1  3  4
    2  5  6
    3  7  8
    4  3  4
    >>> df_merge = df.append(df3, ignore_index=True).drop_duplicates()
    >>> df_merge
       A  B
    0  1  2
    1  3  4
    2  5  6
    3  7  8
    

Missing entries will be filled by NaN.

    >>> df4 = pd.DataFrame([[5, 6, 7], [7, 8, 9]], columns=list('ABC'))
    >>> df4
       A  B  C
    0  5  6  7
    1  7  8  9
    >>> df_merge = df.append(df4, ignore_index=True).drop_duplicates()
    >>> df_merge
       A  B    C
    0  1  2  NaN
    1  3  4  NaN
    2  5  6  7.0
    3  7  8  9.0
    

You can also use pd.concat()

    >>> df_merge = pd.concat([df, df3], ignore_index=True).drop_duplicates()
    >>> df_merge
       A  B
    0  1  2
    1  3  4
    2  5  6
    3  7  8
    

read multiple csv files into a dataframe

    import pandas as pd
    all_files = ("file_1.txt", "file_2.txt")
    dfg = (pd.read_csv(f, sep=',', low_memory=False) for f in all_files)
    df = pd.concat(dfg, ignore_index=True)
    

another way

    import os
    import pandas as pd
    all_files = ("file_1.txt", "file_2.txt")
    frames = []
    for f in all_files:
        if not os.path.isfile(f):
            print("Warning: File", f, "does not exist. Skipping it.")
            continue
        cur_frame = pd.read_csv(f, low_memory=False)
        frames.append(cur_frame)
    df = pd.concat(frames)
    

Ref:- http://stackoverflow.com/questions/20906474/import-multiple-csv-files-into-pandas-and-concatenate-into-one-dataframe
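
If the file names are not known up front, a glob pattern can build the list (the pattern below is a made-up example):

    import glob
    import pandas as pd
    # collect the matching files; sort for a reproducible order
    all_files = sorted(glob.glob('data/part_*.csv'))
    df = pd.concat((pd.read_csv(f) for f in all_files), ignore_index=True)
    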

number of days between two date columns

    >>> a[['end_date', 'start_date']].head()
                      end_date           start_date
    0  2016-09-30 00:00:00.000  2008-02-14 00:00:00
    1  2016-09-30 00:00:00.000  2015-01-23 00:00:00
    2  2016-09-30 00:00:00.000  2014-09-29 00:00:00
    3  2016-09-30 00:00:00.000  2014-09-29 00:00:00
    4  2016-09-30 00:00:00.000  2010-09-14 00:00:00
    
    >>> age = (pd.to_datetime(a['end_date']) - pd.to_datetime(a['start_date']))
    >>> type(age)
    <class 'pandas.core.series.Series'>
    >>> age.head()
    0   3151 days
    1    616 days
    2    732 days
    3    732 days
    4   2208 days
    dtype: timedelta64[ns]
    

To convert it to a number of days

    >>> import numpy as np
    >>> age = (pd.to_datetime(a['end_date']) - pd.to_datetime(a['start_date']))/np.timedelta64(1, 'D')
    >>> type(age)
    <class 'pandas.core.series.Series'>
    >>> age.head()
    0    3151.0
    1     616.0
    2     732.0
    3     732.0
    4    2208.0
    dtype: float64
    

convert the column names of a dataframe to lower case

    df.rename(columns=lambda x: x.lower(), inplace=True)
    

Use case: while merging data from two dataframes using DataFrame.merge(), I ended up with two columns that had the same name except for case (e.g. foo from df1, FOO from df2). This caused problems when uploading the data into a Hadoop cluster, since Hive is not case sensitive. As a workaround, I converted the column names in df2 to lower case and then merged using pd.merge(df1, df2, ..., suffixes=('_df1', '_df2')). The resulting dataframe then has foo_df1 and foo_df2 columns.
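
A minimal sketch of that workaround (dataframe and column names are made up):

    import pandas as pd
    df1 = pd.DataFrame({'key': [1, 2], 'foo': ['a', 'b']})
    df2 = pd.DataFrame({'key': [1, 2], 'FOO': ['c', 'd']})
    # lower-case df2's column names, then merge; the now-shared
    # name 'foo' gets the disambiguating suffixes
    df2 = df2.rename(columns=lambda x: x.lower())
    m = pd.merge(df1, df2, on='key', suffixes=('_df1', '_df2'))
    print(m.columns.tolist())  # ['key', 'foo_df1', 'foo_df2']
    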

count categories

    >>> df2
         Department  Lottery  Literacy  Wealth Region
    1         Aisne       38        51      22      N
    2        Allier       66        13      61      C
    3  Basses-Alpes       80        46      76      E
    4  Hautes-Alpes       79        69      83      E
    5       Ardeche       70        27      84      S
    6      Ardennes       31        67      33      N
    7        Ariege       75        18      72      S
    8          Aube       28        59      14      E
    9          Aude       50        34      17      S
    
    >>> df2['Region'].value_counts()
    S    3
    E    3
    N    2
    C    1
    

add a sequence of numbers as a column to dataframe

    
    In [49]: df
    Out[49]: 
            y
    0  169.91
    1  265.32
    2  158.53
    3  160.87
    4  167.45
    5  158.23
    6  165.52
    7  155.62
    
    In [50]: df['rownum'] = range(1, df.shape[0]+1)
    
    In [51]: df
    Out[51]: 
            y  rownum
    0  169.91       1
    1  265.32       2
    2  158.53       3
    3  160.87       4
    4  167.45       5
    5  158.23       6
    6  165.52       7
    7  155.62       8
    
    In [52]: df.drop('rownum', axis=1, inplace=True)
    
    In [53]: df
    Out[53]: 
            y
    0  169.91
    1  265.32
    2  158.53
    3  160.87
    4  167.45
    5  158.23
    6  165.52
    7  155.62
    

misc task

Task: Check if the values in a column of a dataframe exist among the values in a column of another dataframe. Add the result as a new column to the first data frame.

    In [47]: a = pd.DataFrame({'pkg': ['kdegraphics-strigi-analyzer', 'kdesdk-strigi-plugins', 'libclucene-core1', 'libstreamanalyzer0', 'libzmq3']}); b = pd.DataFrame({'package': ['libzmq3', 'python3.4', 'kdesdk-strigi-plugins']})
    
    In [48]: a
    Out[48]: 
                               pkg
    0  kdegraphics-strigi-analyzer
    1        kdesdk-strigi-plugins
    2             libclucene-core1
    3           libstreamanalyzer0
    4                      libzmq3
    
    In [49]: b
    Out[49]: 
                     package
    0                libzmq3
    1              python3.4
    2  kdesdk-strigi-plugins
    
    In [50]: a['exists'] = a['pkg'].isin(b['package'])
    
    In [51]: a
    Out[51]: 
                               pkg exists
    0  kdegraphics-strigi-analyzer  False
    1        kdesdk-strigi-plugins   True
    2             libclucene-core1  False
    3           libstreamanalyzer0  False
    4                      libzmq3   True
    

unsorted

  • To print all column names in a data frame - df.columns.values
  • Number of missing values in a dataframe - df.isnull().sum()

select only some columns

  • To select columns

    df2 = df1[['col1', 'col2']]

Ref:- http://pandas.pydata.org/pandas-docs/stable/indexing.html

experiment with get_dummies

    >>> import pandas as pd
    >>> df = pd.DataFrame({'A': ['a', 'b', 'a'], 'B': ['b', 'a', 'c'], 'C': [1, 2, 3]})
    >>> df
       A  B  C
    0  a  b  1
    1  b  a  2
    2  a  c  3
    
    >>> pd.get_dummies(df)
       C  A_a  A_b  B_a  B_b  B_c
    0  1  1.0  0.0  0.0  1.0  0.0
    1  2  0.0  1.0  1.0  0.0  0.0
    2  3  1.0  0.0  0.0  0.0  1.0
    >>> pd.get_dummies(df, columns=['A'])
       B  C  A_a  A_b
    0  b  1  1.0  0.0
    1  a  2  0.0  1.0
    2  c  3  1.0  0.0
    >>> pd.get_dummies(df, columns=['B'])
       A  C  B_a  B_b  B_c
    0  a  1  0.0  1.0  0.0
    1  b  2  1.0  0.0  0.0
    2  a  3  0.0  0.0  1.0
    >>> pd.get_dummies(df, columns=['A', 'B'])
       C  A_a  A_b  B_a  B_b  B_c
    0  1  1.0  0.0  0.0  1.0  0.0
    1  2  0.0  1.0  1.0  0.0  0.0
    2  3  1.0  0.0  0.0  0.0  1.0
    >>> pd.get_dummies(df, columns=['B', 'A'])
       C  B_a  B_b  B_c  A_a  A_b
    0  1  0.0  1.0  0.0  1.0  0.0
    1  2  1.0  0.0  0.0  0.0  1.0
    2  3  0.0  0.0  1.0  1.0  0.0
    

Ref:- http://pandas.pydata.org/pandas-docs/version/0.18.1/generated/pandas.get_dummies.html

call function on each group

    grouped = df.groupby('column_foo')
    frames = []
    for key, df_key in grouped:
        # func_bar is a placeholder for the per-group operation
        new_df_key = df_key.func_bar()
        frames.append(new_df_key)
    if not frames:
        new_df = pd.DataFrame(None)
    else:
        new_df = pd.concat(frames)
    

tags | groupby call function

Ref:- http://pandas.pydata.org/pandas-docs/stable/groupby.html

preserve formatting of columns

Set dtype to object to preserve the formatting of the columns. This is useful if we want to dump the data after adding or removing certain columns.

    df = pd.read_csv(fname, dtype=object)
    

deprecated

  • DataFrame.sort is deprecated. Use sort_values instead.
    myscript.py:57: FutureWarning: sort(columns=....) is deprecated, use sort_values(by=.....)
      na_position='last')
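
A minimal sketch of the replacement (column name made up):

    import pandas as pd
    df = pd.DataFrame({'foo': [3.0, None, 1.0]})
    # sort_values replaces the deprecated DataFrame.sort;
    # na_position='last' keeps missing values at the bottom
    df = df.sort_values(by='foo', na_position='last')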
    

Sum of values in a column when another column is 1

Consider the dataframe

    >>> df = pd.DataFrame({'a':[1,1,2,1,2], 'b':[5,7,3,3,5]})
    >>> df
       a  b
    0  1  5
    1  1  7
    2  2  3
    3  1  3
    4  2  5
    

To get the sum of values of b when column a is 1

    >>> df.loc[df['a'] == 1, 'b'].sum()
    15
    

To get the sum of values of b when column a is 2

    >>> df.loc[df['a'] == 2, 'b'].sum()
    8
    

iterate over each column of a dataframe except one

    cols = df.columns.tolist()
    cols.remove('foo')
    for col in cols:
        # do something with df[col]
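
Equivalently, df.columns.drop('foo') returns a new Index without 'foo', so the loop header can be written as for col in df.columns.drop('foo').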
    

External links

Frequent stuff

Common use cases involving DataFrames

For a complete list, see http://pandas.pydata.org/pandas-docs/stable/api.html#index

  • Get the number of rows and columns: rows = df.shape[0], cols = df.shape[1], or (rows, cols) = df.shape. See also DataFrame.shape.
  • Select rows where a column contains certain values: df[df['name'].isin(value_list)]; negate with df[~df['name'].isin(value_list)].
  • Get N distinct values: df['name'].unique()[:N]. See also Series.unique.
  • Get all distinct values: df['name'].unique(). See also Series.unique.
  • Limit a dataframe to N distinct values of a column: define

        def limit_distinct(df, col, N):
            v = df[col].unique()[:N]
            return df[df[col].isin(v)]

    and call it as df.pipe(limit_distinct, 'name', N).
  • Summary stats of a column: df['foo'].func() where func is one of mean, sum, std, median, min, max.
  • Set a string value to missing: df['foo'].replace('bar', np.nan), with numpy imported as np. Passing None as the replacement value historically triggers pad-fill behavior instead of setting a missing value.
  • Select the first 10 rows: df[:10].

API of frequently used dataframe functions

  • pandas.read_csv: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html
  • pandas.merge: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.merge.html
  • DataFrame.replace: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.replace.html