Amazon Redshift Data¶
Amazon Redshift manages all the work of setting up, operating, and scaling a data warehouse: provisioning capacity, monitoring and backing up the cluster, and applying patches and upgrades to the Amazon Redshift engine. You can focus on using your data to acquire new insights for your business and customers.
Prerequisite Tasks¶
To use these operators, you must do a few things:
Create necessary resources using AWS Console or AWS CLI.
Install API libraries via pip.
pip install 'apache-airflow[amazon]'
Detailed information is available in Installation of Airflow®.
Generic Parameters¶
- aws_conn_id
Reference to the Amazon Web Services Connection ID. If this parameter is set to None, the default boto3 behaviour is used without a connection lookup; otherwise, the credentials stored in the Connection are used. Default: aws_default
- region_name
AWS Region name. If this parameter is set to None or omitted, region_name from the AWS Connection Extra Parameter is used; otherwise, the specified value overrides the connection value. Default: None
- verify
Whether or not to verify SSL certificates.
False - Do not validate SSL certificates.
path/to/cert/bundle.pem - A filename of the CA cert bundle to use. You can specify this argument if you want to use a different CA cert bundle than the one used by botocore.
If this parameter is set to None or omitted, verify from the AWS Connection Extra Parameter is used; otherwise, the specified value overrides the connection value. Default: None
- botocore_config
The provided dictionary is used to construct a botocore.config.Config. This configuration can be used, for example, to avoid throttling exceptions or to set connection timeouts.
{
    "signature_version": "unsigned",
    "s3": {
        "us_east_1_regional_endpoint": True,
    },
    "retries": {
        "mode": "standard",
        "max_attempts": 10,
    },
    "connect_timeout": 300,
    "read_timeout": 300,
    "tcp_keepalive": True,
}
If this parameter is set to None or omitted, config_kwargs from the AWS Connection Extra Parameter is used; otherwise, the specified value overrides the connection value. Default: None
Note
Specifying an empty dictionary, {}, will overwrite the connection configuration for botocore.config.Config.
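As a minimal sketch (not taken from the example file), these generic parameters can be passed directly to the operators below; the cluster identifier, region, and retry settings here are placeholder values:

from airflow.providers.amazon.aws.operators.redshift_data import RedshiftDataOperator

query_with_overrides = RedshiftDataOperator(
    task_id="query_with_overrides",
    cluster_identifier="my-redshift-cluster",  # placeholder cluster name
    database="dev",
    sql="SELECT 1;",
    # Generic AWS parameters described above; values are illustrative only.
    aws_conn_id="aws_default",
    region_name="us-east-1",
    verify=True,
    botocore_config={"retries": {"mode": "standard", "max_attempts": 10}},
)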
Operators¶
Execute a statement on an Amazon Redshift cluster¶
Use the RedshiftDataOperator to execute statements against an Amazon Redshift cluster.
This differs from RedshiftSQLOperator in that it allows users to query and retrieve data via the AWS API, avoiding the need for a Postgres connection.
tests/system/amazon/aws/example_redshift.py
create_table_redshift_data = RedshiftDataOperator(
    task_id="create_table_redshift_data",
    cluster_identifier=redshift_cluster_identifier,
    database=DB_NAME,
    db_user=DB_LOGIN,
    sql=[
        """
        CREATE TABLE IF NOT EXISTS fruit (
            fruit_id INTEGER,
            name VARCHAR NOT NULL,
            color VARCHAR NOT NULL
        );
        """
    ],
    poll_interval=POLL_INTERVAL,
    wait_for_completion=True,
)
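The operator can also push the query results themselves to XCom rather than just the statement ID; a brief sketch, assuming the provider's return_sql_result parameter (False by default):

select_fruit = RedshiftDataOperator(
    task_id="select_fruit",
    cluster_identifier=redshift_cluster_identifier,
    database=DB_NAME,
    db_user=DB_LOGIN,
    sql="SELECT fruit_id, name, color FROM fruit;",
    poll_interval=POLL_INTERVAL,
    wait_for_completion=True,
    return_sql_result=True,  # push the Data API result set to XCom instead of the statement ID
)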
Reuse a session when executing multiple statements¶
Specify the session_keep_alive_seconds parameter on an upstream task. In a downstream task, get the session ID from the XCom and pass it to the session_id parameter. This is useful when you work with temporary tables.
tests/system/amazon/aws/example_redshift.py
create_tmp_table_data_api = RedshiftDataOperator(
    task_id="create_tmp_table_data_api",
    cluster_identifier=redshift_cluster_identifier,
    database=DB_NAME,
    db_user=DB_LOGIN,
    sql=[
        """
        CREATE TEMPORARY TABLE tmp_people (
            id INTEGER,
            first_name VARCHAR(100),
            age INTEGER
        );
        """
    ],
    poll_interval=POLL_INTERVAL,
    wait_for_completion=True,
    session_keep_alive_seconds=600,
)

insert_data_reuse_session = RedshiftDataOperator(
    task_id="insert_data_reuse_session",
    sql=[
        "INSERT INTO tmp_people VALUES ( 1, 'Bob', 30);",
        "INSERT INTO tmp_people VALUES ( 2, 'Alice', 35);",
        "INSERT INTO tmp_people VALUES ( 3, 'Charlie', 40);",
    ],
    poll_interval=POLL_INTERVAL,
    wait_for_completion=True,
    session_id="{{ task_instance.xcom_pull(task_ids='create_tmp_table_data_api', key='session_id') }}",
)
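For the XCom pull above to resolve, the downstream task must run after the task that opened the session; a dependency declaration such as the following (assumed here, not shown in the excerpt) wires that up:

create_tmp_table_data_api >> insert_data_reuse_session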