Migrating a Transparent Data Encryption (TDE)-enabled Amazon Relational Database Service (Amazon RDS) for Oracle DB instance from one AWS account to another is a common requirement, especially during mergers, acquisitions, or organizational restructuring. TDE is a permanent and persistent option in RDS for Oracle option groups, and a DB snapshot that uses an option group with permanent or persistent options can't be shared with another AWS account. For further details, refer to our blog post on sharing DB snapshots. In this post, we outline the steps and best practices for migrating a TDE-enabled RDS for Oracle DB instance between AWS accounts while minimizing downtime, using Oracle Data Pump in conjunction with AWS Database Migration Service (AWS DMS). Because AWS DMS does not support TDE at the column level, we also cover the additional steps required to migrate tables encrypted with TDE at that level.
This migration process is divided into three key phases:
- Initial Data Load: The initial data load is executed using Oracle Data Pump on the TDE-enabled RDS for Oracle DB instance.
- Ongoing Replication: AWS DMS, a fully managed service, supports change data capture (CDC) for RDS for Oracle DB instances, allowing for ongoing data replication and a reduced outage window.
- Migrating TDE-Encrypted Tables During Cutover: Because AWS DMS does not replicate tables that use column-level TDE, you refresh these tables at cutover (a query to identify them follows this list).
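Because the tables that use column-level TDE need special handling at cutover, it helps to identify them up front. The following query is one way to list them on the source database; DMS_SAMPLE is the sample schema used in this post, and the query assumes access to the DBA_ENCRYPTED_COLUMNS view:

SQL(source)> SELECT owner, table_name, column_name, encryption_alg
             FROM dba_encrypted_columns
             WHERE owner = 'DMS_SAMPLE'
             ORDER BY table_name, column_name;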
Solution Overview
In this post, we employ the Oracle Data Pump API for the initial data load and set up an AWS DMS CDC-only task for ongoing replication. The architecture is depicted in the accompanying diagram.
To implement this solution, follow these steps:
- Prepare the source RDS for Oracle DB instance for migration.
- Prepare the target RDS for Oracle DB instance for migration.
- Capture the System Change Number (SCN) to use as the AWS DMS CDC start point (see the example after this list).
- Export the source database using Oracle Data Pump.
- Transfer the Oracle Data Pump export dump file set to the target DB instance.
- Load data into the target database utilizing Oracle Data Pump.
- Validate the target database.
- Enable Amazon RDS backup retention and archive logging on the target DB instance.
- Configure ongoing replication through AWS DMS.
- Reload tables with column-level TDE and perform the cutover.
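As an example of the SCN capture step, you can record the current SCN on the source database immediately before starting the Data Pump export; this value later serves as the CDC start point for the AWS DMS task:

SQL(source)> SELECT current_scn FROM v$database;

Make a note of the returned value; you supply it when you create the CDC-only task.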
In this demonstration, the source AWS account is 6144xxxxxxxx and the target account is 2634xxxxxxxx. Both accounts are part of the same AWS Organization. The source RDS for Oracle DB instance is named rds-oracle-source-01, and the target is rds-oracle-target-01; both run a non-multitenant Oracle Database 19c version. We use AWS DMS for the ongoing replication process. For this article, we use DMS_SAMPLE as our sample schema, which includes two tables with TDE-encrypted columns.
Prerequisites
Ensure the following prerequisites are met:
- Connectivity between your source and target AWS accounts must be established, either through VPC peering or AWS Transit Gateway. For more details, see how to create a VPC peering connection.
- The VPC security group linked to both RDS for Oracle instances must permit inbound connections from the AWS DMS replication instance. The security group associated with the replication instance should also allow all outbound connections. Refer to setting up a network for a replication instance for more info.
- Automated backups must be enabled on the source RDS for Oracle DB instance. For more information, see enabling automated backups.
- To capture ongoing changes, AWS DMS requires minimal supplemental logging to be enabled on your Oracle source database, as well as on each replicated table (see the sketch after this list).
- You will need a bastion host with the SQL*Plus client installed that can connect to both the source and target RDS for Oracle instances.
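As a sketch of the supplemental logging prerequisite: on RDS for Oracle, database-level supplemental logging is enabled through the rdsadmin package rather than ALTER DATABASE, and table-level logging is added with standard DDL. The table name in the last statement is illustrative only (use this form for tables without a primary key, as described under Limitations):

SQL(source)> EXEC rdsadmin.rdsadmin_util.alter_supplemental_logging(p_action => 'ADD');
SQL(source)> EXEC rdsadmin.rdsadmin_util.alter_supplemental_logging(p_action => 'ADD', p_type => 'PRIMARY KEY');
-- Illustrative: for a table without a primary key, log all columns
SQL(source)> ALTER TABLE dms_sample.some_table_without_pk ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;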
Limitations
This solution comes with specific limitations:
- The AWS DMS Binary Reader method only supports TDE for self-managed Oracle databases.
- When replicating from Amazon RDS for Oracle, TDE is only supported with encrypted tablespaces and Oracle LogMiner.
- AWS DMS supports CDC for RDS for Oracle database tables that have primary keys. If a table lacks a primary key, you must enable supplemental logging on all columns to provide AWS DMS with enough data to update the target table.
- During CDC, AWS DMS only supports large object (LOB) data types in tables with primary keys.
- If your tables use sequences, the sequences do not advance on the target while AWS DMS copies changes during ongoing replication. Update the NEXTVAL of each sequence in the target database at cutover, after stopping replication from the source (see the sketch at the end of this section).
For further information about limitations when using an Oracle database as a source and target with AWS DMS, check the limitations on using an Oracle database as a source and the limitations on Oracle as a target for AWS Database Migration Service.
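The following is a minimal sketch of the sequence adjustment mentioned in the last limitation; the sequence name and start value are illustrative. After stopping the application and replication, read the sequence high-water mark on the source (LAST_NUMBER reflects the cached ceiling, which is a safe upper bound), then advance the target sequence past it:

SQL(source)> SELECT last_number FROM dba_sequences
             WHERE sequence_owner = 'DMS_SAMPLE' AND sequence_name = 'SAMPLE_SEQ';

-- Oracle Database 18c and later can restart a sequence in place;
-- alternatively, drop and recreate it with the desired START WITH value
SQL(target)> ALTER SEQUENCE dms_sample.sample_seq RESTART START WITH 100001;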
Preparing the Source RDS for Oracle DB Instance for Migration
To set up your source DB instance for migration, follow these steps:
- Create a DMS_USER account in the source RDS for Oracle database and grant it the privileges that AWS DMS requires (a sketch follows this list).
- The source RDS for Oracle DB instance must have enough storage to hold both the export dump files and the archived logs generated during the database export, as well as during the transfer of the dump files and their subsequent load into the destination DB instance. We suggest increasing the storage based on the estimated size of the export dump files and the expected archive log generation. Keep in mind that Amazon RDS storage autoscaling cannot completely prevent storage-full situations during large data loads, because further storage modifications are blocked for either six hours or until storage optimization completes on the instance, whichever is longer. For more information on these limitations, refer to the limitations section.
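The following is a minimal sketch of the DMS_USER setup; the password is a placeholder and the grants shown are only a subset of what AWS DMS needs, so review the full privilege list for an Amazon RDS for Oracle source in the AWS DMS documentation. On RDS for Oracle, SYS-owned objects are granted through the rdsadmin package:

SQL(source)> CREATE USER dms_user IDENTIFIED BY "ChangeMe#2024";
SQL(source)> GRANT CREATE SESSION, SELECT ANY TABLE, SELECT ANY TRANSACTION TO dms_user;
SQL(source)> EXEC rdsadmin.rdsadmin_util.grant_sys_object('V_$DATABASE', 'DMS_USER', 'SELECT');
SQL(source)> EXEC rdsadmin.rdsadmin_util.grant_sys_object('V_$ARCHIVED_LOG', 'DMS_USER', 'SELECT');
SQL(source)> EXEC rdsadmin.rdsadmin_util.grant_sys_object('V_$LOGMNR_CONTENTS', 'DMS_USER', 'SELECT');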
To size this storage increase, you can estimate the Oracle Data Pump export dump file set by using the DBMS_DATAPUMP API in a SQL*Plus session:
SQL(source)> DECLARE
  v_hdnl NUMBER;
BEGIN
  -- Open a schema-mode export job
  v_hdnl := DBMS_DATAPUMP.OPEN(
    operation => 'EXPORT',
    job_mode  => 'SCHEMA',
    job_name  => null
  );
  -- Write the job log (which contains the size estimate) to DATA_PUMP_DIR
  DBMS_DATAPUMP.ADD_FILE(
    handle    => v_hdnl,
    filename  => 'estimate_dump_size.log',
    directory => 'DATA_PUMP_DIR',
    filetype  => dbms_datapump.ku$_file_type_log_file
  );
  -- Limit the export to the DMS_SAMPLE schema
  DBMS_DATAPUMP.METADATA_FILTER(v_hdnl, 'SCHEMA_EXPR', 'IN (''DMS_SAMPLE'')');
  -- Estimate the dump file set size without writing any dump files
  DBMS_DATAPUMP.SET_PARAMETER(
    handle => v_hdnl,
    name   => 'ESTIMATE_ONLY',
    value  => 1
  );
  -- Exclude procedural objects owned by Amazon RDS-managed accounts
  DBMS_DATAPUMP.METADATA_FILTER(
    v_hdnl,
    'EXCLUDE_NAME_EXPR',
    q'[IN (SELECT NAME FROM SYS.OBJ$
           WHERE TYPE# IN (66,67,74,79,59,62,46)
           AND OWNER# IN
             (SELECT USER# FROM SYS.USER$
              WHERE NAME IN ('RDSADMIN','SYS','SYSTEM','RDS_DATAGUARD','RDSSEC')
             )
          )
    ]',
    'PROCOBJ'
  );
  DBMS_DATAPUMP.START_JOB(v_hdnl);
END;
/
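When the job completes, the estimated size of the dump file set is written to estimate_dump_size.log in DATA_PUMP_DIR. One way to read the log without leaving SQL*Plus is the rdsadmin.rds_file_util.read_text_file function:

SQL(source)> SELECT * FROM TABLE(
               rdsadmin.rds_file_util.read_text_file('DATA_PUMP_DIR', 'estimate_dump_size.log')
             );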