airflow.providers.amazon.aws.operators.ec2

Module Contents

Classes

EC2StartInstanceOperator

Start AWS EC2 instance using boto3.

EC2StopInstanceOperator

Stop AWS EC2 instance using boto3.

EC2CreateInstanceOperator

Create and start a specified number of EC2 Instances using boto3.

EC2TerminateInstanceOperator

Terminate EC2 Instances using boto3.

EC2RebootInstanceOperator

Reboot Amazon EC2 instances.

EC2HibernateInstanceOperator

Hibernate Amazon EC2 instances.

class airflow.providers.amazon.aws.operators.ec2.EC2StartInstanceOperator(*, instance_id, aws_conn_id='aws_default', region_name=None, check_interval=15, **kwargs)[source]

Bases: airflow.models.BaseOperator

Start AWS EC2 instance using boto3.

See also

For more information on how to use this operator, take a look at the guide: Start an Amazon EC2 instance

Parameters
  • instance_id (str) – ID of the AWS EC2 instance

  • aws_conn_id (str | None) – The Airflow connection used for AWS credentials. If this is None or empty then the default boto3 behaviour is used. If running Airflow in a distributed manner and aws_conn_id is None or empty, then default boto3 configuration would be used (and must be maintained on each worker node).

  • region_name (str | None) – (optional) AWS region name associated with the client

  • check_interval (float) – time, in seconds, to wait between each instance state check until the operation is completed

template_fields: Sequence[str] = ('instance_id', 'region_name')[source]
ui_color = '#eeaa11'[source]
ui_fgcolor = '#ffffff'[source]
execute(context)[source]

Derive when creating an operator.

Context is the same dictionary used as when rendering jinja templates.

Refer to get_template_context for more context.

class airflow.providers.amazon.aws.operators.ec2.EC2StopInstanceOperator(*, instance_id, aws_conn_id='aws_default', region_name=None, check_interval=15, **kwargs)[source]

Bases: airflow.models.BaseOperator

Stop AWS EC2 instance using boto3.

See also

For more information on how to use this operator, take a look at the guide: Stop an Amazon EC2 instance

Parameters
  • instance_id (str) – ID of the AWS EC2 instance

  • aws_conn_id (str | None) – The Airflow connection used for AWS credentials. If this is None or empty then the default boto3 behaviour is used. If running Airflow in a distributed manner and aws_conn_id is None or empty, then default boto3 configuration would be used (and must be maintained on each worker node).

  • region_name (str | None) – (optional) AWS region name associated with the client

  • check_interval (float) – time, in seconds, to wait between each instance state check until the operation is completed

template_fields: Sequence[str] = ('instance_id', 'region_name')[source]
ui_color = '#eeaa11'[source]
ui_fgcolor = '#ffffff'[source]
execute(context)[source]

Derive when creating an operator.

Context is the same dictionary used as when rendering jinja templates.

Refer to get_template_context for more context.

class airflow.providers.amazon.aws.operators.ec2.EC2CreateInstanceOperator(image_id, max_count=1, min_count=1, aws_conn_id='aws_default', region_name=None, poll_interval=20, max_attempts=20, config=None, wait_for_completion=False, **kwargs)[source]

Bases: airflow.models.BaseOperator

Create and start a specified number of EC2 Instances using boto3.

See also

For more information on how to use this operator, take a look at the guide: Create and start an Amazon EC2 instance

Parameters
  • image_id (str) – ID of the AMI used to create the instance.

  • max_count (int) – Maximum number of instances to launch. Defaults to 1.

  • min_count (int) – Minimum number of instances to launch. Defaults to 1.

  • aws_conn_id (str | None) – The Airflow connection used for AWS credentials. If this is None or empty then the default boto3 behaviour is used. If running Airflow in a distributed manner and aws_conn_id is None or empty, then default boto3 configuration would be used (and must be maintained on each worker node).

  • region_name (str | None) – AWS region name associated with the client.

  • poll_interval (int) – Number of seconds to wait before attempting to check state of instance. Only used if wait_for_completion is True. Default is 20.

  • max_attempts (int) – Maximum number of attempts when checking state of instance. Only used if wait_for_completion is True. Default is 20.

  • config (dict | None) – Dictionary for arbitrary parameters to the boto3 run_instances call.

  • wait_for_completion (bool) – If True, the operator will wait for the instance to be in the running state before returning.

template_fields: Sequence[str] = ('image_id', 'max_count', 'min_count', 'aws_conn_id', 'region_name', 'config', 'wait_for_completion')[source]
execute(context)[source]

Derive when creating an operator.

Context is the same dictionary used as when rendering jinja templates.

Refer to get_template_context for more context.

on_kill()[source]

Override this method to clean up subprocesses when a task instance gets killed.

Any use of the threading, subprocess or multiprocessing module within an operator needs to be cleaned up, or it will leave ghost processes behind.

class airflow.providers.amazon.aws.operators.ec2.EC2TerminateInstanceOperator(instance_ids, aws_conn_id='aws_default', region_name=None, poll_interval=20, max_attempts=20, wait_for_completion=False, **kwargs)[source]

Bases: airflow.models.BaseOperator

Terminate EC2 Instances using boto3.

See also

For more information on how to use this operator, take a look at the guide: Terminate an Amazon EC2 instance

Parameters
  • instance_ids (str | list[str]) – ID of the instance(s) to be terminated.

  • aws_conn_id (str | None) – The Airflow connection used for AWS credentials. If this is None or empty then the default boto3 behaviour is used. If running Airflow in a distributed manner and aws_conn_id is None or empty, then default boto3 configuration would be used (and must be maintained on each worker node).

  • region_name (str | None) – AWS region name associated with the client.

  • poll_interval (int) – Number of seconds to wait before attempting to check state of instance. Only used if wait_for_completion is True. Default is 20.

  • max_attempts (int) – Maximum number of attempts when checking state of instance. Only used if wait_for_completion is True. Default is 20.

  • wait_for_completion (bool) – If True, the operator will wait for the instance to be in the terminated state before returning.

template_fields: Sequence[str] = ('instance_ids', 'region_name', 'aws_conn_id', 'wait_for_completion')[source]
execute(context)[source]

Derive when creating an operator.

Context is the same dictionary used as when rendering jinja templates.

Refer to get_template_context for more context.

class airflow.providers.amazon.aws.operators.ec2.EC2RebootInstanceOperator(*, instance_ids, aws_conn_id='aws_default', region_name=None, poll_interval=20, max_attempts=20, wait_for_completion=False, **kwargs)[source]

Bases: airflow.models.BaseOperator

Reboot Amazon EC2 instances.

See also

For more information on how to use this operator, take a look at the guide: Reboot an Amazon EC2 instance

Parameters
  • instance_ids (str | list[str]) – ID of the instance(s) to be rebooted.

  • aws_conn_id (str | None) – The Airflow connection used for AWS credentials. If this is None or empty then the default boto3 behaviour is used. If running Airflow in a distributed manner and aws_conn_id is None or empty, then default boto3 configuration would be used (and must be maintained on each worker node).

  • region_name (str | None) – AWS region name associated with the client.

  • poll_interval (int) – Number of seconds to wait before attempting to check state of instance. Only used if wait_for_completion is True. Default is 20.

  • max_attempts (int) – Maximum number of attempts when checking state of instance. Only used if wait_for_completion is True. Default is 20.

  • wait_for_completion (bool) – If True, the operator will wait for the instance to be in the running state before returning.

template_fields: Sequence[str] = ('instance_ids', 'region_name')[source]
ui_color = '#eeaa11'[source]
ui_fgcolor = '#ffffff'[source]
execute(context)[source]

Derive when creating an operator.

Context is the same dictionary used as when rendering jinja templates.

Refer to get_template_context for more context.

class airflow.providers.amazon.aws.operators.ec2.EC2HibernateInstanceOperator(*, instance_ids, aws_conn_id='aws_default', region_name=None, poll_interval=20, max_attempts=20, wait_for_completion=False, **kwargs)[source]

Bases: airflow.models.BaseOperator

Hibernate Amazon EC2 instances.

See also

For more information on how to use this operator, take a look at the guide: Hibernate an Amazon EC2 instance

Parameters
  • instance_ids (str | list[str]) – ID of the instance(s) to be hibernated.

  • aws_conn_id (str | None) – The Airflow connection used for AWS credentials. If this is None or empty then the default boto3 behaviour is used. If running Airflow in a distributed manner and aws_conn_id is None or empty, then default boto3 configuration would be used (and must be maintained on each worker node).

  • region_name (str | None) – AWS region name associated with the client.

  • poll_interval (int) – Number of seconds to wait before attempting to check state of instance. Only used if wait_for_completion is True. Default is 20.

  • max_attempts (int) – Maximum number of attempts when checking state of instance. Only used if wait_for_completion is True. Default is 20.

  • wait_for_completion (bool) – If True, the operator will wait for the instance to be in the stopped state before returning.

template_fields: Sequence[str] = ('instance_ids', 'region_name')[source]
ui_color = '#eeaa11'[source]
ui_fgcolor = '#ffffff'[source]
execute(context)[source]

Derive when creating an operator.

Context is the same dictionary used as when rendering jinja templates.

Refer to get_template_context for more context.
