:mod:`airflow.executors.local_executor`
=======================================

.. py:module:: airflow.executors.local_executor

.. autoapi-nested-parse::

   LocalExecutor runs tasks by spawning processes in a controlled fashion, in one of two
   modes. BaseExecutor accepts a `parallelism` parameter that limits the number of
   processes spawned; when this parameter is `0`, the number of processes that
   LocalExecutor can spawn is unlimited.

   The following strategies are implemented:

   1. Unlimited Parallelism (`self.parallelism == 0`): LocalExecutor spawns a process
      every time `execute_async` is called, that is, every task submitted to the
      LocalExecutor is executed in its own process. Once the task is executed and the
      result is stored in the `result_queue`, the process terminates. There is no need
      for a `task_queue` in this approach, since a new process is allocated to each task
      as soon as it is received. Processes used in this strategy are of class LocalWorker.

   2. Limited Parallelism (`self.parallelism > 0`): At `start` time, LocalExecutor spawns
      a number of processes equal to `self.parallelism` and uses a `task_queue` to
      coordinate the ingestion of tasks and the distribution of work among the workers,
      each of which takes a task as soon as it is ready. Throughout the lifecycle of the
      LocalExecutor the worker processes keep running and waiting for tasks; once the
      LocalExecutor receives the call to shut down, a poison token is sent to the workers
      to terminate them. Processes used in this strategy are of class QueuedLocalWorker.

   Arguably, `SequentialExecutor` could be thought of as a LocalExecutor with limited
   parallelism of just one worker, i.e. `self.parallelism = 1`. This option could lead to
   the unification of the locally running executor implementations into a single
   `LocalExecutor` with multiple modes.
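
   To make the limited-parallelism mode concrete, the sketch below shows the general
   pattern with plain `multiprocessing`: a fixed pool of workers is fed through a shared
   task queue and shut down with poison tokens. This is an illustration of the pattern
   only, not the executor's actual code; names such as `queued_worker` and `run_limited`
   are made up for the example.

   .. code-block:: python

      import multiprocessing
      import subprocess


      def queued_worker(task_queue, result_queue):
          """Consume (key, command) tasks until a poison token (None) arrives."""
          while True:
              task = task_queue.get()
              if task is None:  # poison token: terminate this worker
                  break
              key, command = task
              success = subprocess.call(command, shell=True) == 0
              result_queue.put((key, success))


      def run_limited(commands, parallelism):
          """Run (key, command) pairs on a fixed pool of worker processes."""
          task_queue = multiprocessing.Queue()
          result_queue = multiprocessing.Queue()
          workers = [multiprocessing.Process(target=queued_worker,
                                             args=(task_queue, result_queue))
                     for _ in range(parallelism)]
          for worker in workers:
              worker.start()
          for key_command in commands:
              task_queue.put(key_command)
          for _ in workers:  # one poison token per worker
              task_queue.put(None)
          results = [result_queue.get() for _ in range(len(commands))]
          for worker in workers:
              worker.join()
          return results

   In the unlimited mode there is no task queue at all: a new process is started for
   every command and exits after putting its result on the result queue.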










Module Contents
---------------






.. py:class:: LocalWorker(result_queue)

   Bases: :class:`multiprocessing.Process`, :class:`airflow.utils.log.logging_mixin.LoggingMixin`

   

   LocalWorker process implementation that runs airflow commands. It executes the given
   command, puts the result into a result queue when done, and then terminates.


   

   

   

   .. method:: execute_work(self, key, command)

      
      Executes the received command and stores the resulting state in the result queue.

      :param key: the key to identify the TI
      :type key: tuple(dag_id, task_id, execution_date)
      :param command: the command to execute
      :type command: str
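
      The behavior of this method is roughly the following sketch of the method body (a
      simplified illustration, not the exact implementation); it assumes the `State`
      values from `airflow.utils.state` for the result state:

      .. code-block:: python

         import subprocess

         from airflow.utils.state import State

         def execute_work(self, key, command):
             # Run the shell command and map its exit status to a task state.
             try:
                 subprocess.check_call(command, shell=True, close_fds=True)
                 state = State.SUCCESS
             except subprocess.CalledProcessError:
                 state = State.FAILED
             # Report the outcome so the executor's sync() can pick it up.
             self.result_queue.put((key, state))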

      



   

   .. method:: run(self)

      











.. py:class:: QueuedLocalWorker(task_queue, result_queue)

   Bases: :class:`airflow.executors.local_executor.LocalWorker`

   

   LocalWorker implementation that waits for tasks from a queue and keeps executing
   commands as they become available in the queue. It terminates once the poison token
   is found.
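
   The `run` loop has roughly this shape (an illustrative sketch, not the exact
   implementation), assuming the poison token is a pair with a `None` key and a joinable
   task queue:

   .. code-block:: python

      def run(self):
          while True:
              key, command = self.task_queue.get()
              try:
                  if key is None:
                      # Poison token received: no more work, shut down.
                      break
                  self.execute_work(key, command)
              finally:
                  self.task_queue.task_done()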


   

   

   

   .. method:: run(self)

      











.. py:class:: LocalExecutor

   Bases: :class:`airflow.executors.base_executor.BaseExecutor`

   

   LocalExecutor executes tasks locally in parallel. It uses the
   multiprocessing Python library and queues to parallelize the execution
   of tasks.
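
   The executor is normally driven by the scheduler rather than used directly. As a rough
   sketch of the lifecycle (the key and command values below are hypothetical, and the
   scheduler normally goes through `heartbeat` rather than calling these methods by hand):

   .. code-block:: python

      import datetime

      from airflow.executors.local_executor import LocalExecutor

      executor = LocalExecutor()
      executor.start()  # spawns the worker pool when parallelism > 0

      # Hypothetical task instance key (dag_id, task_id, execution_date) and command.
      key = ('example_dag', 'example_task', datetime.datetime(2019, 1, 1))
      executor.execute_async(
          key=key, command='airflow run example_dag example_task 2019-01-01')

      executor.sync()  # drain the result queue and record finished task states
      executor.end()   # wait for outstanding work and terminate the workers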


   

   

   .. py:class:: _UnlimitedParallelism(executor)

      Bases: :class:`object`

      

      Implements LocalExecutor with unlimited parallelism, starting one process per
      command to execute.
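
      Roughly (an illustrative sketch of the method body, not the exact implementation),
      each call to `execute_async` starts a dedicated, short-lived LocalWorker:

      .. code-block:: python

         def execute_async(self, key, command):
             # One short-lived LocalWorker process per submitted command; it
             # reports back through the shared result queue and then exits.
             worker = LocalWorker(self.executor.result_queue)
             worker.key = key
             worker.command = command
             worker.start()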


      

      

      

      .. method:: start(self)

         



      

      .. method:: execute_async(self, key, command)

         
         :param key: the key to identify the TI
         :type key: tuple(dag_id, task_id, execution_date)
         :param command: the command to execute
         :type command: str

         



      

      .. method:: sync(self)

         



      

      .. method:: end(self)

         





   

   

   .. py:class:: _LimitedParallelism(executor)

      Bases: :class:`object`

      

      Implements LocalExecutor with limited parallelism using a task queue to
      coordinate work distribution.
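
      Roughly (an illustrative sketch, not the exact implementation), `start` creates the
      fixed pool of QueuedLocalWorker processes and `execute_async` only enqueues work;
      the attribute names `queue`, `result_queue` and `workers` on the parent executor
      are assumptions made for the sketch:

      .. code-block:: python

         def start(self):
             # Fixed pool: one QueuedLocalWorker per unit of parallelism, all
             # sharing the executor's task queue and result queue.
             self.executor.workers = [
                 QueuedLocalWorker(self.executor.queue, self.executor.result_queue)
                 for _ in range(self.executor.parallelism)
             ]
             for worker in self.executor.workers:
                 worker.start()

         def execute_async(self, key, command):
             # Hand the task to whichever worker becomes free first.
             self.executor.queue.put((key, command))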


      

      

      

      .. method:: start(self)

         



      

      .. method:: execute_async(self, key, command)

         
         :param key: the key to identify the TI
         :type key: tuple(dag_id, task_id, execution_date)
         :param command: the command to execute
         :type command: str

         



      

      .. method:: sync(self)

         



      

      .. method:: end(self)

         





   

   

   .. method:: start(self)

      



   

   .. method:: execute_async(self, key, command, queue=None, executor_config=None)

      



   

   .. method:: sync(self)

      



   

   .. method:: end(self)