airflow.providers.cncf.kubernetes.executors.local_kubernetes_executor

Module Contents

Classes

LocalKubernetesExecutor

Chooses between LocalExecutor and KubernetesExecutor based on the queue defined on the task.

class airflow.providers.cncf.kubernetes.executors.local_kubernetes_executor.LocalKubernetesExecutor(local_executor, kubernetes_executor)[source]

Bases: airflow.utils.log.logging_mixin.LoggingMixin

Chooses between LocalExecutor and KubernetesExecutor based on the queue defined on the task.

When the task’s queue is the value of kubernetes_queue in section [local_kubernetes_executor] of the configuration (default value: kubernetes), KubernetesExecutor is selected to run the task; otherwise, LocalExecutor is used.
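
In practice the routing decision depends only on the task’s queue attribute. A minimal sketch of the selection logic (illustrative names; not the actual Airflow implementation):

```python
# Illustrative sketch of the queue-based routing described above,
# not the actual Airflow source.
KUBERNETES_QUEUE = "kubernetes"  # default of [local_kubernetes_executor] kubernetes_queue


def pick_executor(task_queue, local_executor, kubernetes_executor):
    """Select the wrapped executor that should run a task with this queue."""
    if task_queue == KUBERNETES_QUEUE:
        return kubernetes_executor
    return local_executor
```

A task opts into KubernetesExecutor by setting queue="kubernetes" (or whatever kubernetes_queue is configured to) on its operator; all other tasks run on LocalExecutor.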

property queued_tasks: dict[airflow.models.taskinstance.TaskInstanceKey, airflow.executors.base_executor.QueuedTaskInstanceType][source]

Return the queued tasks from both the local and kubernetes executors.

property running: set[airflow.models.taskinstance.TaskInstanceKey][source]

Return the running tasks from both the local and kubernetes executors.

property job_id: str | None[source]

Inherited attribute from BaseExecutor.

Since this is not really an executor but a wrapper around two executors, it is implemented as a property so that a custom setter can propagate the value to both of them.

property slots_available: int[source]

Number of new tasks this executor instance can accept.

supports_ad_hoc_ti_run: bool = True[source]
supports_pickling: bool = False[source]
supports_sentry: bool = False[source]
is_local: bool = False[source]
is_single_threaded: bool = False[source]
is_production: bool = True[source]
serve_logs: bool = True[source]
change_sensor_mode_to_reschedule: bool = False[source]
callback_sink: airflow.callbacks.base_callback_sink.BaseCallbackSink | None[source]
KUBERNETES_QUEUE[source]
start()[source]

Start both the local and kubernetes executors.

queue_command(task_instance, command, priority=1, queue=None)[source]

Queue a command via the local or kubernetes executor.

queue_task_instance(task_instance, mark_success=False, pickle_id=None, ignore_all_deps=False, ignore_depends_on_past=False, wait_for_past_depends_before_skipping=False, ignore_task_deps=False, ignore_ti_state=False, pool=None, cfg_path=None)[source]

Queue a task instance via the local or kubernetes executor.

get_task_log(ti, try_number)[source]

Fetch the task log from the kubernetes executor.

has_task(task_instance)[source]

Check if a task is either queued or running in either local or kubernetes executor.

Parameters

task_instance (airflow.models.taskinstance.TaskInstance) – TaskInstance

Returns

True if the task is known to this executor

Return type

bool

heartbeat()[source]

Send a heartbeat to trigger new jobs in both the local and kubernetes executors.

get_event_buffer(dag_ids=None)[source]

Return and flush the event buffer from both the local and kubernetes executors.

Parameters

dag_ids (list[str] | None) – dag_ids to return events for; if None, events for all dags are returned

Returns

a dict of events

Return type

dict[airflow.models.taskinstance.TaskInstanceKey, airflow.executors.base_executor.EventBufferValueType]
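
The flushing behaviour can be sketched as follows, assuming each buffer is a dict keyed by tuples whose first element is the dag_id (as in TaskInstanceKey). Entries that are returned are also removed from their buffer:

```python
def get_event_buffer_sketch(buffers, dag_ids=None):
    """Illustrative: return and flush events from several buffers, merged.

    ``buffers`` is a list of event-buffer dicts, one per wrapped executor,
    each keyed by (dag_id, task_id, run_id, try_number) tuples.
    """
    merged = {}
    for buffer in buffers:
        for key in list(buffer):
            if dag_ids is None or key[0] in dag_ids:
                merged[key] = buffer.pop(key)  # flushing removes the event
    return merged
```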

try_adopt_task_instances(tis)[source]

Try to adopt running task instances that have been abandoned by a SchedulerJob dying.

Anything that is not adopted will be cleared by the scheduler (and then become eligible for re-scheduling).

Returns

any TaskInstances that were unable to be adopted

Return type

Sequence[airflow.models.taskinstance.TaskInstance]

cleanup_stuck_queued_tasks(tis)[source]
end()[source]

End both the local and kubernetes executors.

terminate()[source]

Terminate both the local and kubernetes executors.

debug_dump()[source]

Debug dump; called in response to SIGUSR2 by the scheduler.

send_callback(request)[source]

Send callback for execution.

Parameters

request (airflow.callbacks.callback_requests.CallbackRequest) – Callback request to be executed.

static get_cli_commands()[source]
