Python/multiprocessing notes

see exception in the child process

Note:- This is an adaptation of https://seasonofcode.com/posts/python-multiprocessing-and-exceptions.html for Python 2. The original article uses Python 3 and its code is a bit abstract. Here I present an example that can actually be run.

One problem with the multiprocessing module is that exceptions raised in spawned child processes do not print their stack traces.

Consider the following:

    # Adapted from https://seasonofcode.com/posts/python-multiprocessing-and-exceptions.html
    
    import multiprocessing
    
    def f(x):
        return 1.0 / x
    
    if __name__ == '__main__':
        pool = multiprocessing.Pool(5)
        tasks = [pool.apply_async(f, (i,))
                 for i in range(5)]
        [task.get() for task in tasks]
    

Running it produces the following error message:

    Traceback (most recent call last):
      File "trace_01.py", line 12, in <module>
        [task.get() for task in tasks]
      File "C:\ProgramData\Continuum\Anaconda\lib\multiprocessing\pool.py", line 567, in get
        raise self._value
    ZeroDivisionError: float division by zero
    

using:

    $ python2 --version
    Python 2.7.13 :: Anaconda custom (64-bit)
    

It is not clear what triggered the ZeroDivisionError. We see the stack trace of the main process, but not the stack trace of the code in the worker process that actually raised the exception.

To fix this, catch the exception inside the worker process, print its traceback, and re-raise it.

    # Adapted from https://seasonofcode.com/posts/python-multiprocessing-and-exceptions.html
    
    import multiprocessing
    import traceback
    
    def f(x):
        try:
            return 1.0 / x
        except Exception:
            print 'Caught exception in worker process (x = %d):' % x

            # This prints the type, value, and stack trace of the
            # current exception being handled.
            traceback.print_exc()

            print
            # Re-raise with a bare `raise` so the parent still sees the
            # failure; unlike `raise e`, this keeps the original traceback.
            raise
    
    if __name__ == '__main__':
        pool = multiprocessing.Pool(5)
        tasks = [pool.apply_async(f, (i,))
                 for i in range(5)]
        [task.get() for task in tasks]
    

Running this will give:

    Caught exception in worker process (x = 0):
    Traceback (most recent call last):
      File "H:\work\python\python2\multiprocessing_traceback\trace_02.py", line 8, in f
        return 1.0 / x
    ZeroDivisionError: float division by zero
    
    Traceback (most recent call last):
      File "trace_02.py", line 23, in <module>
        [task.get() for task in tasks]
      File "C:\ProgramData\Continuum\Anaconda\lib\multiprocessing\pool.py", line 567, in get
        raise self._value
    ZeroDivisionError: float division by zero
    

which reveals the actual culprit.
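
The try/except boilerplate can be factored into a reusable decorator, which is roughly what the original article does. The following is a sketch adapted to Python 2; log_worker_exceptions is a made-up name. Note that functools.wraps keeps the wrapper's __name__ equal to 'f', so the pool can still pickle the decorated function by name.

    import functools
    import multiprocessing
    import traceback

    def log_worker_exceptions(func):
        # Print the worker-side traceback before re-raising, so the
        # bare exception seen by the parent is accompanied by the real
        # location of the failure.
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            try:
                return func(*args, **kwargs)
            except Exception:
                traceback.print_exc()
                raise
        return wrapper

    @log_worker_exceptions
    def f(x):
        return 1.0 / x

    if __name__ == '__main__':
        pool = multiprocessing.Pool(5)
        tasks = [pool.apply_async(f, (i,)) for i in range(5)]
        [task.get() for task in tasks]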

exception during reset or similar

Symptom:-

Python code that worked on Windows was raising an exception on Linux:

    ERROR - Exception during reset or similar
    

The code connects to multiple databases using sqlalchemy.create_engine(), then fetches some data with pandas.read_sql_query(). After that it starts a multiprocessing.Pool(). Each child process connects to the same databases to fetch additional data. However, the child processes were reusing the engines created by the parent process, which caused the exception above.

How I fixed it:-

After the parent process has collected the required data, call Engine.dispose() before starting the pool, so the child processes do not inherit the parent's database connections.
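
A minimal sketch of the fix; DB_URL and the queries here are hypothetical placeholders, not the original code. The key points are that the parent disposes its engine before forking and that each child creates its own engine:

    import multiprocessing

    import pandas as pd
    import sqlalchemy

    DB_URL = 'postgresql://user:password@host/dbname'  # hypothetical

    def child_work(query):
        # Each child creates (and disposes) its own engine instead of
        # reusing connections inherited from the parent across fork().
        engine = sqlalchemy.create_engine(DB_URL)
        try:
            return pd.read_sql_query(query, engine)
        finally:
            engine.dispose()

    if __name__ == '__main__':
        engine = sqlalchemy.create_engine(DB_URL)
        df = pd.read_sql_query('SELECT 1 AS x', engine)
        # Drop the parent's pooled connections *before* starting the
        # pool so the children do not inherit live database sockets.
        engine.dispose()

        pool = multiprocessing.Pool(2)
        results = pool.map(child_work, ['SELECT 2 AS y', 'SELECT 3 AS z'])
        pool.close()
        pool.join()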

apply a function on each row of dataframe in pool

    import multiprocessing

    def MyFunc(args):
        foo, bar = args
        # Do something with bar to generate a response; the line below
        # is just a placeholder.
        response = bar * 2
        # Return foo along with the response so the caller knows which
        # row the result belongs to (imap_unordered does not preserve
        # the input order).
        return foo, response

    # populate the following
    #   num_workers - number of workers
    #   df - data frame with columns 'foo' and 'bar'. The idea is to
    #        call MyFunc on each row of df.

    pool = multiprocessing.Pool(num_workers)
    for (foo, response) in pool.imap_unordered(
            MyFunc,
            [(row.foo, row.bar) for row in df.itertuples(index=True, name='Pandas')]):
        # Do something with response and foo. For example, to add the
        # result as another column in the dataframe:
        df.loc[df['foo'] == foo, 'response'] = response
    pool.close()
    pool.join()
    
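As a concrete, runnable driver for the template above (the DataFrame contents and worker count are made up):

    import multiprocessing

    import pandas as pd

    def MyFunc(args):
        foo, bar = args
        return foo, bar * 2  # placeholder work

    if __name__ == '__main__':
        df = pd.DataFrame({'foo': ['a', 'b', 'c'], 'bar': [1, 2, 3]})
        pool = multiprocessing.Pool(2)
        for (foo, response) in pool.imap_unordered(
                MyFunc,
                [(row.foo, row.bar)
                 for row in df.itertuples(index=True, name='Pandas')]):
            df.loc[df['foo'] == foo, 'response'] = response
        pool.close()
        pool.join()
        # df now has a 'response' column holding bar * 2 for each row
        # (stored as floats, since the column starts out as NaN).
        print df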

tags | pandas multiprocessing pool apply, store return value of multiprocessing as another column

demonstrates | add result as another column to the dataframe when processing its rows in parallel using a pool