airflow.providers.amazon.aws.sensors.bedrock¶
Classes¶
- BedrockBaseSensor – General sensor behavior for Amazon Bedrock.
- BedrockCustomizeModelCompletedSensor – Poll the state of the model customization job until it reaches a terminal state; fails if the job fails.
- BedrockProvisionModelThroughputCompletedSensor – Poll the provisioned model throughput job until it reaches a terminal state; fails if the job fails.
- BedrockKnowledgeBaseActiveSensor – Poll the Knowledge Base status until it reaches a terminal state; fails if creation fails.
- BedrockIngestionJobSensor – Poll the ingestion job status until it reaches a terminal state; fails if creation fails.
- BedrockBatchInferenceSensor – Poll the batch inference job status until it reaches a terminal state; fails if creation fails.
Module Contents¶
- class airflow.providers.amazon.aws.sensors.bedrock.BedrockBaseSensor(deferrable=conf.getboolean('operators', 'default_deferrable', fallback=False), **kwargs)[source]¶
Bases: airflow.providers.amazon.aws.sensors.base_aws.AwsBaseSensor[_GenericBedrockHook]

General sensor behavior for Amazon Bedrock.
- Subclasses must implement the following methods:
get_state()
- Subclasses must set the following fields:
INTERMEDIATE_STATES
FAILURE_STATES
SUCCESS_STATES
FAILURE_MESSAGE
- Parameters:
deferrable (bool) – If True, the sensor will operate in deferrable mode. This mode requires aiobotocore module to be installed. (default: False, but can be overridden in config file by setting default_deferrable to True)
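The subclass contract above can be sketched in plain Python. The state tuples and the stub `get_state` callables below are hypothetical stand-ins for a real Bedrock API call; they only illustrate how a poke-style check combines the fields a subclass must set.

```python
# Minimal sketch of the BedrockBaseSensor contract: a subclass supplies the
# state tuples and a get_state() implementation, and each poke maps the
# returned state onto success / failure / keep-waiting.
INTERMEDIATE_STATES = ("InProgress",)   # keep polling
FAILURE_STATES = ("Failed", "Stopped")  # raise and fail the task
SUCCESS_STATES = ("Completed",)         # sensor succeeds
FAILURE_MESSAGE = "Bedrock job failed."


def poke(get_state):
    """Return True when the job succeeded, False to keep waiting."""
    state = get_state()
    if state in FAILURE_STATES:
        raise RuntimeError(FAILURE_MESSAGE)
    return state in SUCCESS_STATES


# Stubbed get_state() calls standing in for real Bedrock API responses:
print(poke(lambda: "InProgress"))  # False: still waiting
print(poke(lambda: "Completed"))   # True: terminal success
```

A state in FAILURE_STATES raises immediately, which is what "fails if the job fails" means for every sensor in this module.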
- class airflow.providers.amazon.aws.sensors.bedrock.BedrockCustomizeModelCompletedSensor(*, job_name, max_retries=75, poke_interval=120, **kwargs)[source]¶
Bases: BedrockBaseSensor[airflow.providers.amazon.aws.hooks.bedrock.BedrockHook]

Poll the state of the model customization job until it reaches a terminal state; fails if the job fails.
See also
For more information on how to use this sensor, take a look at the guide: Wait for an Amazon Bedrock customize model job
- Parameters:
job_name (str) – The name of the Bedrock model customization job.
deferrable – If True, the sensor will operate in deferrable mode. This mode requires aiobotocore module to be installed. (default: False, but can be overridden in config file by setting default_deferrable to True)
poke_interval (int) – Polling period in seconds to check for the status of the job. (default: 120)
max_retries (int) – Number of times before returning the current state. (default: 75)
aws_conn_id – The Airflow connection used for AWS credentials. If this is None or empty, the default boto3 behaviour is used. If running Airflow in a distributed manner and aws_conn_id is None or empty, the default boto3 configuration is used (and must be maintained on each worker node).
region_name – AWS region name. If not specified, the default boto3 behaviour is used.
verify – Whether or not to verify SSL certificates. See: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/core/session.html
botocore_config – Configuration dictionary (key-values) for the botocore client. See: https://botocore.amazonaws.com/v1/documentation/api/latest/reference/config.html
- template_fields: collections.abc.Sequence[str][source]¶
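The defaults above imply a worst-case wait before the sensor stops polling a still-running customization job; a quick back-of-the-envelope check:

```python
# With the defaults documented above, the sensor checks the job status every
# 120 seconds, up to 75 times, before returning the current (non-terminal)
# state. The upper bound on the polling window is therefore:
poke_interval = 120  # seconds between status checks (default)
max_retries = 75     # number of checks before giving up (default)

worst_case_seconds = poke_interval * max_retries
print(worst_case_seconds)          # 9000 seconds
print(worst_case_seconds / 3600)   # 2.5 hours
```

Model customization jobs can run for hours, which is why this sensor's defaults are far larger than, say, the Knowledge Base sensor's (5 s × 24 retries).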
- class airflow.providers.amazon.aws.sensors.bedrock.BedrockProvisionModelThroughputCompletedSensor(*, model_id, poke_interval=60, max_retries=20, **kwargs)[source]¶
Bases: BedrockBaseSensor[airflow.providers.amazon.aws.hooks.bedrock.BedrockHook]

Poll the provisioned model throughput job until it reaches a terminal state; fails if the job fails.
See also
For more information on how to use this sensor, take a look at the guide: Wait for an Amazon Bedrock provision model throughput job
- Parameters:
model_id (str) – The ARN or name of the provisioned throughput.
deferrable – If True, the sensor will operate in deferrable mode. This mode requires the aiobotocore module to be installed. (default: False, but can be overridden in config file by setting default_deferrable to True)
poke_interval (int) – Polling period in seconds to check for the status of the job. (default: 60)
max_retries (int) – Number of times before returning the current state. (default: 20)
aws_conn_id – The Airflow connection used for AWS credentials. If this is None or empty, the default boto3 behaviour is used. If running Airflow in a distributed manner and aws_conn_id is None or empty, the default boto3 configuration is used (and must be maintained on each worker node).
region_name – AWS region name. If not specified, the default boto3 behaviour is used.
verify – Whether or not to verify SSL certificates. See: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/core/session.html
botocore_config – Configuration dictionary (key-values) for the botocore client. See: https://botocore.amazonaws.com/v1/documentation/api/latest/reference/config.html
- template_fields: collections.abc.Sequence[str][source]¶
- class airflow.providers.amazon.aws.sensors.bedrock.BedrockKnowledgeBaseActiveSensor(*, knowledge_base_id, poke_interval=5, max_retries=24, **kwargs)[source]¶
Bases: BedrockBaseSensor[airflow.providers.amazon.aws.hooks.bedrock.BedrockAgentHook]

Poll the Knowledge Base status until it reaches a terminal state; fails if creation fails.
See also
For more information on how to use this sensor, take a look at the guide: Wait for an Amazon Bedrock Knowledge Base
- Parameters:
knowledge_base_id (str) – The unique identifier of the knowledge base for which to get information. (templated)
deferrable – If True, the sensor will operate in deferrable mode. This mode requires the aiobotocore module to be installed. (default: False, but can be overridden in config file by setting default_deferrable to True)
poke_interval (int) – Polling period in seconds to check for the status of the job. (default: 5)
max_retries (int) – Number of times before returning the current state. (default: 24)
aws_conn_id – The Airflow connection used for AWS credentials. If this is None or empty, the default boto3 behaviour is used. If running Airflow in a distributed manner and aws_conn_id is None or empty, the default boto3 configuration is used (and must be maintained on each worker node).
region_name – AWS region name. If not specified, the default boto3 behaviour is used.
verify – Whether or not to verify SSL certificates. See: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/core/session.html
botocore_config – Configuration dictionary (key-values) for the botocore client. See: https://botocore.amazonaws.com/v1/documentation/api/latest/reference/config.html
- template_fields: collections.abc.Sequence[str][source]¶
- class airflow.providers.amazon.aws.sensors.bedrock.BedrockIngestionJobSensor(*, knowledge_base_id, data_source_id, ingestion_job_id, poke_interval=60, max_retries=10, **kwargs)[source]¶
Bases: BedrockBaseSensor[airflow.providers.amazon.aws.hooks.bedrock.BedrockAgentHook]

Poll the ingestion job status until it reaches a terminal state; fails if creation fails.
See also
For more information on how to use this sensor, take a look at the guide: Wait for an Amazon Bedrock ingestion job to finish
- Parameters:
knowledge_base_id (str) – The unique identifier of the knowledge base for which to get information. (templated)
data_source_id (str) – The unique identifier of the data source in the ingestion job. (templated)
ingestion_job_id (str) – The unique identifier of the ingestion job. (templated)
deferrable – If True, the sensor will operate in deferrable mode. This mode requires the aiobotocore module to be installed. (default: False, but can be overridden in config file by setting default_deferrable to True)
poke_interval (int) – Polling period in seconds to check for the status of the job. (default: 60)
max_retries (int) – Number of times before returning the current state. (default: 10)
aws_conn_id – The Airflow connection used for AWS credentials. If this is None or empty, the default boto3 behaviour is used. If running Airflow in a distributed manner and aws_conn_id is None or empty, the default boto3 configuration is used (and must be maintained on each worker node).
region_name – AWS region name. If not specified, the default boto3 behaviour is used.
verify – Whether or not to verify SSL certificates. See: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/core/session.html
botocore_config – Configuration dictionary (key-values) for the botocore client. See: https://botocore.amazonaws.com/v1/documentation/api/latest/reference/config.html
- template_fields: collections.abc.Sequence[str][source]¶
- class airflow.providers.amazon.aws.sensors.bedrock.BedrockBatchInferenceSensor(*, job_arn, success_state=SuccessState.SCHEDULED, poke_interval=120, max_retries=75, **kwargs)[source]¶
Bases: BedrockBaseSensor[airflow.providers.amazon.aws.hooks.bedrock.BedrockHook]

Poll the batch inference job status until it reaches a terminal state; fails if creation fails.
See also
For more information on how to use this sensor, take a look at the guide: Wait for an Amazon Bedrock batch inference job
- Parameters:
job_arn (str) – The Amazon Resource Name (ARN) of the batch inference job. (templated)
success_state (SuccessState | str) – A BedrockBatchInferenceSensor.SuccessState; defaults to SuccessState.SCHEDULED. (templated)
deferrable – If True, the sensor will operate in deferrable mode. This mode requires the aiobotocore module to be installed. (default: False, but can be overridden in config file by setting default_deferrable to True)
poke_interval (int) – Polling period in seconds to check for the status of the job. (default: 120)
max_retries (int) – Number of times before returning the current state. (default: 75)
aws_conn_id – The Airflow connection used for AWS credentials. If this is None or empty, the default boto3 behaviour is used. If running Airflow in a distributed manner and aws_conn_id is None or empty, the default boto3 configuration is used (and must be maintained on each worker node).
region_name – AWS region name. If not specified, the default boto3 behaviour is used.
verify – Whether or not to verify SSL certificates. See: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/core/session.html
botocore_config – Configuration dictionary (key-values) for the botocore client. See: https://botocore.amazonaws.com/v1/documentation/api/latest/reference/config.html
- class SuccessState[source]¶
Target state for the BedrockBatchInferenceSensor.
Bedrock adds batch inference jobs to a queue, and they may take some time to complete. If you want to wait for the job to complete, use SuccessState.COMPLETED; if you only want to wait until the service confirms that the job is in the queue, use SuccessState.SCHEDULED.
- The normal successful progression of states is:
Submitted > Validating > Scheduled > InProgress > PartiallyCompleted > Completed
- FAILURE_STATES: tuple[str, ...] = ('Failed', 'Stopped', 'PartiallyCompleted', 'Expired')[source]¶
- template_fields: collections.abc.Sequence[str][source]¶
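How the choice of success_state partitions the batch inference states can be sketched in plain Python. This is a hypothetical illustration, not the provider's actual implementation; the state names come from the progression and FAILURE_STATES documented above.

```python
# Hypothetical classification of a batch inference job state given the
# target success_state, based on the documented state progression.
PROGRESSION = ("Submitted", "Validating", "Scheduled", "InProgress",
               "PartiallyCompleted", "Completed")


def classify(state, success_state):
    """Return 'success', 'failure', or 'waiting' for one poll result."""
    if state in ("Failed", "Stopped", "Expired"):
        return "failure"
    if success_state == "SCHEDULED":
        # Any state at or past 'Scheduled' means the job made it into
        # the queue, which is all this target waits for.
        if PROGRESSION.index(state) >= PROGRESSION.index("Scheduled"):
            return "success"
        return "waiting"
    # success_state == "COMPLETED": only full completion counts, and a
    # partially completed job is listed above as a failure state.
    if state == "Completed":
        return "success"
    if state == "PartiallyCompleted":
        return "failure"
    return "waiting"


print(classify("Scheduled", "SCHEDULED"))          # success
print(classify("PartiallyCompleted", "COMPLETED")) # failure
```

Note the asymmetry: 'PartiallyCompleted' is past 'Scheduled' in the progression, so a SCHEDULED target has already succeeded by then, while a COMPLETED target treats it as failure.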