
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

0 votes
0 answers
49 views
AWS DMS table mapping remove specific columns and rename
I'm trying to run AWS DMS with a specific table mapping: I have my source table, and I want to rename it to match the target table name and also skip certain columns that I no longer need and don't want in the target table. How can I do that? I tried this but it doesn't work. Is this the correct approach?

{
  "rules": [
    {
      "rule-type": "selection",
      "rule-id": "1",
      "rule-name": "select-team_assignments",
      "object-locator": {
        "schema-name": "source_schema",
        "table-name": "proj_team_assignments"
      },
      "rule-action": "include"
    },
    {
      "rule-type": "transformation",
      "rule-id": "2",
      "rule-name": "rename-table",
      "rule-action": "rename",
      "object-locator": {
        "schema-name": "source_schema",
        "table-name": "proj_team_assignments"
      },
      "value": "team_assignments"
    },
    {
      "rule-type": "transformation",
      "rule-id": "3",
      "rule-name": "drop-columns",
      "rule-action": "remove-columns",
      "object-locator": {
        "schema-name": "source_schema",
        "table-name": "proj_team_assignments"
      },
      "value": [
        "team_relation_type",
        "issue_tracking_id",
        "analysis_skills",
        "communication_oral",
        "coaching_others",
        "problem_solving",
        "written_communication",
        "standard_work_mgmt",
        "recommendation_synthesis"
      ]
    }
  ]
}

Also, do column types have to match one to one, or can they differ a bit? For example, a json field instead of text, or a string instead of int? Thanks!
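For what it's worth, a quick way to see whether a mapping like this took effect is to inspect the target's catalog after a test run. The sketch below is only a hypothetical check, assuming the target engine exposes information_schema; the table and column names are taken from the question.

```sql
-- Hypothetical post-run check on the target: the renamed table should exist
-- and the removed columns should be absent.
SELECT column_name
FROM information_schema.columns
WHERE table_name = 'team_assignments'
  AND column_name IN ('team_relation_type', 'issue_tracking_id', 'analysis_skills');
-- Zero rows back suggests the column-removal transformation was applied.
```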
Foobar (1 rep)
May 6, 2025, 11:35 PM
1 votes
2 answers
10059 views
AWS DMS Fails with "Command failed to load data with exit error code 1"
I am trying to migrate my PostgreSQL RDS instance to Aurora PostgreSQL using DMS. All my tasks are failing, and I am unable to understand what can be done to help the tasks progress. The error from the CloudWatch logs is:
...
2019-05-29T15:05:48 [TASK_MANAGER ]I: Start loading table 'public'.'my_ios_device_table' (Id = 642) by subtask 3. Start load timestamp 00058A081DCF0B40 (replicationtask_util.c:707)
2019-05-29T15:05:48 [SOURCE_UNLOAD ]I: Calculated batch used for UNLOAD size is 1000 rows per fetch. (postgres_endpoint_unload.c:178)
2019-05-29T15:05:48 [SOURCE_UNLOAD ]I: REPLICA IDENTITY information for table 'public'.'my_ios_device_table': Query status='Success' Type='DEFAULT' Description='Old values of the Primary Key columns (if any) will be captured.' (postgres_endpoint_unload.c:191)
2019-05-29T15:05:49 [SOURCE_UNLOAD ]I: Unload finished for table 'public'.'my_ios_device_table' (Id = 642). 1589 rows sent. (streamcomponent.c:3392)
2019-05-29T15:05:49 [TARGET_LOAD ]I: Load finished for table 'public'.'my_ios_device_table' (Id = 642). 1589 rows received. 0 rows skipped. Volume transfered 1558848. (streamcomponent.c:3663)
2019-05-29T15:05:50 [TARGET_LOAD ]E: Command failed to load data with exit error code 1, Command output:   (csv_target.c:981)
2019-05-29T15:05:50 [TARGET_LOAD ]E: Failed to wait for previous run  (csv_target.c:1578)
2019-05-29T15:05:50 [TARGET_LOAD ]E: Failed to load data from csv file.  (odbc_endpoint_imp.c:5648)
2019-05-29T15:05:50 [TARGET_LOAD ]E: Handling End of table 'public'.'my_ios_device_table' loading failed by subtask 3 thread 1  (endpointshell.c:2416)
2019-05-29T15:05:50 [TASK_MANAGER ]W: Table 'public'.'my_ios_device_table' (subtask 3 thread 1) is suspended (replicationtask.c:2356)
...
I have checked the PostgreSQL roles and privileges and even disabled triggers using SET session_replication_role = 'replica';. I am at my wits' end as to why this is failing and how to rectify it. I am migrating from RDS PostgreSQL 9.5.16 to Aurora PostgreSQL 10.7. Any help is appreciated. Thanks. [Screenshot: DMS task details]
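One thing worth ruling out on the target before digging deeper is whether the endpoint's user can actually write to the failing table. A minimal sketch; 'dms_user' is a placeholder for whatever role the DMS target endpoint uses:

```sql
-- Hypothetical privilege check on the Aurora PostgreSQL target, for the table
-- named in the log above ('dms_user' is a placeholder role name).
SELECT has_schema_privilege('dms_user', 'public', 'USAGE')                     AS can_use_schema,
       has_table_privilege('dms_user', 'public.my_ios_device_table', 'INSERT') AS can_insert;
```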
gvatreya (201 rep)
May 29, 2019, 03:58 PM • Last activity: Mar 31, 2025, 08:08 AM
1 votes
1 answers
2141 views
What does MISSING_TARGET mean in Amazon DMS (Database Migration Service) validation?
I'm running Amazon DMS from RDS to Aurora. The errors after a full restart (it has never been successful) from the awsdms_validation_failures_v1 table are:
[
  {
    "TASK_NAME": "TPPHLQOQH3WTSU27DVRVWS2BAXTQ7E7DYITWJCQ",
    "TABLE_OWNER": "public",
    "TABLE_NAME": "fraud_info",
    "FAILURE_TIME": "2021-10-20 13:20:13.031174",
    "KEY_TYPE": "Row",
    "KEY": "{\n\t\"key\":\t[\"9990\"]\n}",
    "FAILURE_TYPE": "MISSING_TARGET"
  }
]
When I query the information schema on the target database, the table does exist:
SELECT *
FROM information_schema.columns
WHERE table_schema = 'public'
	AND table_name   = 'fraud_info'
;
What does it mean?
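Since MISSING_TARGET is a row-level validation failure rather than a schema-level one, it refers to the specific key in the failure record, not to the table or column definitions. A minimal sketch of how to verify that, assuming the validated key column is the table's primary key (here called id, a placeholder name):

```sql
-- Run against both the source and the target and compare; the key value comes
-- from the KEY field of the validation failure above.
SELECT *
FROM public.fraud_info
WHERE id = 9990;
-- One row on the source but zero rows on the target is what MISSING_TARGET reports.
```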
Kokizzu (1403 rep)
Oct 20, 2021, 01:40 PM • Last activity: Mar 8, 2025, 01:07 AM
1 votes
0 answers
208 views
How to troubleshoot logical replication on PostgreSQL when it comes to high replication lag?
I have a use case in which we use AWS DMS to stream data to our data lakes (S3). From time to time the replication lag gets out of control, leading to over 500 GB of replication slot lag. We have also observed that when this happens, the wal_sender process causes an overhead of thousands of write IOPS on the RDS instance. The "Oldest Replication Slot" metric keeps increasing, and the pg_stat_replication view shows a backend_xmin held at a very old age. The fact that the replication slot is not advancing is also causing a lot of vacuum delays on the RDS instance. I figured I would start the troubleshooting with the basics:

1. The max_wal_senders setting is double the number of max_replication_slots. Check.
2. Network latency between source and target is very low, about 3 milliseconds. Check.
3. No resource contention on the DMS replication instances: lots of CPU, free memory and free storage. Check.
4. RDS logs show no issues with regard to logical replication. Check.
5. Data is being persisted in S3. Check.

What am I missing? What are the steps for finding the root cause of high PostgreSQL replication lag on asynchronous streaming logical replication (FYI, I am using test_decoding as the decoder)? The only reason I can think of is that the DMS client is not sending the signal to advance the replication slot after decoding the message and persisting it to the target S3, which then causes the replication slot to be stuck on the same LSN (the slot query sketched below can help confirm that). I would love to not depend on AWS support for finding the root cause of this issue, because the AWS DMS support team has proven very inefficient at assessing issues and providing useful responses when asked. Thanks for your help in advance.
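For reference, a sketch of the slot check mentioned above, run on the source RDS PostgreSQL instance (assumes PostgreSQL 10+ for the pg_wal_* function names):

```sql
-- Retained WAL per slot and whether the consumer is acknowledging progress.
-- A confirmed_flush_lsn that never advances while retained WAL keeps growing
-- points at the consumer (here DMS) not confirming what it has read.
SELECT slot_name,
       active,
       restart_lsn,
       confirmed_flush_lsn,
       pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)) AS retained_wal
FROM pg_replication_slots;
```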
F S (11 rep)
Aug 26, 2024, 02:08 AM • Last activity: Aug 30, 2024, 05:25 PM
0 votes
0 answers
83 views
How does AWS Data Migration Service actually work with MSSQL?
I'm very confused as to how AWS Data Migration Service is actually achieving its replication from an on-prem MS SQL 2019 instance to RDS 2019. I understand from the documentation that tables with a primary key use SQL Server replication, and those without use SQL Server Change Data Capture. I have a trial run set up that consists of one table with a primary key and one table without. It is successfully replicating to the RDS instance. All inserts, updates and deletes are replicated for the table with the primary key. All inserts and deletes are replicated for the table with no primary key, but updates are not. I can see a replication publication has been set up, but there are no subscriptions. It is clearly able to replicate because the data is moving to RDS, but I don't see how without a subscription. CDC does NOT appear to have been set up. There are no CDC tables created in the database and is_cdc_enabled is still 0 in sys.databases. Yet the data is replicating to RDS - at least inserts and deletes. I did read, in my attempts to understand this, that AWS holds a transaction open and can read the subsequent log records, but I can't see any open transactions in sys.dm_exec_requests. Both methods of replication seem to be happening by wizardry to me at the moment. Can anyone shed any light on what is mechanically going on here?
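A couple of diagnostic queries that may help narrow down which mechanism is in play; a sketch only, these just report state and change nothing:

```sql
-- On the on-prem SQL Server source: is the database published for replication,
-- and is CDC enabled at the database level?
SELECT name, is_published, is_cdc_enabled
FROM sys.databases;

-- Sessions currently holding open transactions (DMS is reported to hold the log
-- open this way, though it may not always be visible as an active request).
SELECT session_id, status, command, open_transaction_count
FROM sys.dm_exec_requests
WHERE open_transaction_count > 0;
```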
RPD (31 rep)
Mar 28, 2024, 04:56 AM • Last activity: Apr 2, 2024, 01:57 PM
0 votes
1 answers
76 views
aurora mysql - migrating large tables
We have a write-intensive application using Aurora MySQL 3.02 with a 2xlarge instance size in prod. I want to migrate 10 tables (the biggest one being 450 GB) from a second Aurora MySQL cluster into this prod DB. I tried using DMS, which affects the write workload due to DB load. On average this DB has 2,500-4,000 IOPS. The question is: if I change the instance class to a bigger size, say 8xlarge, will it help DMS and the write workload get through without issues? If the 8xlarge helps resolve the issue, would it be recommended to run DMS with indexes on the target table, or to create the indexes after the data is migrated? Are there any suggestions, do's and don'ts for this scenario from your experience? Please advise.
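For the indexing part of the question, the usual trade-off is that secondary indexes maintained during the full load slow the copy down, while building them afterwards is one large sorted operation. A sketch with placeholder names only:

```sql
-- Hypothetical example: add secondary indexes after the DMS full load completes,
-- rather than maintaining them row by row during the copy.
ALTER TABLE prod_db.big_table
    ADD INDEX idx_big_table_created_at (created_at),
    ADD INDEX idx_big_table_customer_id (customer_id);
```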
JollyRoger (11 rep)
Dec 3, 2023, 02:04 AM • Last activity: Dec 5, 2023, 12:37 AM
0 votes
0 answers
139 views
Data validation for RDS PostgreSQL migration
I'm planning to replace a bunch of RDS PostgreSQL databases using this method defined by AWS. [Diagram: migration method] The DMS task will be CDC-only. I came up with data validation requirements for the migration and am open to feedback.

# Requirements

1. If a table **has a PK**, validate it using AWS DMS.
2. If a table has a column that would be considered LOB, such as JSON, validate the column by hashing the specific column on the source and target, then diffing the sums (method A).
3. If a table doesn't have a primary key but has a column that is effectively unique, though not declared as UNIQUE in the schema, validate the table using WbDataDiff (method B). The effectively unique column will be specified as the alternate key (--alternateKey).
4. If a table doesn't have a primary key and has no unique columns, validate the table by hashing a set of columns with highly variable data on the source and target, then diffing the sums (method A).

# Validation Methods

A. Hash a single column or set of columns with highly variable data and diff.

- This method can validate LOB migration, e.g. JSON columns.
- This method can validate tables that don't have a primary key or any effectively UNIQUE column.
- The SQL below came from this source.
SELECT
    -- Prepends an x the extracted first 8 characters of the hash
    -- (generates a hash string; the purpose of the x is so that postgres interprets them as hex strings when casting to a number)
    -- then converts the string to a 32 bit int
    -- Finally, all of the ints are summed
    sum(('x' || SUBSTRING(hash, 1, 8)) :: BIT(32) :: BIGINT) as sum1,
    -- Does the same as the column above, but with the next 8 characters
    sum(('x' || SUBSTRING(hash, 9, 8)) :: BIT(32) :: BIGINT) as sum2,
    sum(('x' || SUBSTRING(hash, 17, 8)) :: BIT(32) :: BIGINT) as sum3,
    sum(('x' || SUBSTRING(hash, 25, 8)) :: BIT(32) :: BIGINT) as sum4
FROM
    (
        SELECT
            md5 (
                -- When the column value is null, use a space
                COALESCE(md5(address::TEXT), ' ') ||
                COALESCE(md5(date::TEXT), ' ')
            ) AS hash
        FROM
            example_table
    ) AS t;
B. Use SQL Workbench/J's WbDataDiff data comparison tool to validate tables that don't have a PK or UNIQUE key. This method can show exact differences in the data between the source and target.

- The tool accepts an alternate key. The alternate key does not need to be declared as UNIQUE in the schema.
WbDataDiff -referenceProfile="source"
           -targetProfile="target"
           -referenceTables=lerg1
           -file=lerg1_diff.sql
           -includeDelete=true
           -alternateKey='lerg1=ocn_no'
           -excludeRealPK=true
           -showProgress=true
Trouble Bucket (159 rep)
Sep 11, 2023, 08:38 PM
1 votes
1 answers
488 views
Time zone conversion in DMS from DB2 to Aurora Postgres
## Challenge

We are using [AWS DMS](https://aws.amazon.com/dms/) to convert data from IBM DB2 into [AWS Aurora Postgres](https://aws.amazon.com/rds/aurora/features/). The IBM DB2 server is set to Eastern time (ET). Timestamp-related data inside DB2 is therefore in ET.

## Findings

AWS DMS basically functions by making a connection to the source (DB2) and the destination (PG) and copying the data from one to the other. DMS is connecting to PG and appears to be setting the connection time zone to UTC. We have tried changing the PG server config time zone to America/New_York to try to get the DMS connection to default to that, but it won't; we surmise that DMS is specifically setting its connection to UTC. We are fairly certain that if we could get DMS to connect as ET this would all work perfectly, but we can't seem to find a way to do that.

## Other Ideas

Is there some server configuration that might be able to **force** a client to use a certain session time zone? We are trying to avoid post-load conversion of the data due to time constraints. We were able to get this working by having PG convert the data as part of a without time zone to with time zone ALTER, as mentioned in this related [question](https://dba.stackexchange.com/questions/322904/alter-timestamptz-column-to-timestamp-without-converting-data/322907), but that is a post-load conversion, which we are trying to avoid. AWS DMS [transformations](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.CustomizingTasks.TableMapping.SelectionTransformation.Expressions.html) are being investigated but we haven't figured out a solution using them yet. AWS DMS allows setting [connection options](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.MySQL.html) for [MySQL destinations](https://aws.amazon.com/premiumsupport/knowledge-center/dms-migrate-mysql-non-utc/), but not PG 😔. Any ideas on this would be appreciated.
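One server-side knob that does exist in PostgreSQL is a per-role or per-database default time zone, although it only sets the session default at connect time; a client that explicitly issues SET TIME ZONE afterwards (which DMS appears to do) would still override it. A hedged sketch with placeholder names:

```sql
-- Idea only, not verified against DMS: role- and database-level defaults for the
-- session time zone. 'dms_target_user' and 'target_db' are placeholder names.
ALTER ROLE dms_target_user SET timezone = 'America/New_York';
ALTER DATABASE target_db SET timezone = 'America/New_York';
```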
Padraic Renaghan (73 rep)
Feb 2, 2023, 01:53 PM • Last activity: Aug 2, 2023, 01:49 PM
3 votes
1 answers
1832 views
Getting Error 1236 (Could not find first log file name in binary log index file) reading binlog in AWS DMS
I have created two DMS tasks (full load + CDC) with an RDS MySQL read replica as the source endpoint and Redshift as the target endpoint. Both tasks use the same source and target endpoints. One task with only 3 tables is running fine, while another task with 400+ tables fails with the error below:

> Error 1236 (Could not find first log file name in binary log index file) reading binlog in AWS DMS.

If I resume the task it fails. However, if I restart it, it runs for a few days (6-7 days) and then fails again. I have increased the log retention period to 24 hrs on both the master and the read replica of the MySQL RDS instance, but no luck. Please assist me in resolving this issue. Thanks in advance.
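For reference, binlog retention on RDS MySQL is usually inspected and raised with the RDS-provided procedures below; 24 hours may simply be shorter than the gap the large task needs to catch up, so a larger value is often tried first (the 144 here is just an example). The retention has to hold on whichever instance DMS reads from, in this case the read replica:

```sql
-- Show the current RDS configuration, including 'binlog retention hours'.
CALL mysql.rds_show_configuration;

-- Example: keep binlogs for 6 days (144 hours); the value is illustrative only.
CALL mysql.rds_set_configuration('binlog retention hours', 144);
```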
Sumant Kumar (83 rep)
Dec 3, 2022, 12:58 AM • Last activity: Dec 5, 2022, 07:55 AM
1 votes
0 answers
189 views
aws documentation for DMS CDC is confusing
I am following this documentation https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.PostgreSQL.html#CHAP_Source.PostgreSQL.Security to enable CDC using logical replication. I have the pglogical extension installed and added under the "shared_preload_libraries" parameter. The replication slot was created fine:

SELECT * FROM pg_create_logical_replication_slot('test_slot', 'pglogical');

But why do I get this error when creating the replication set exactly as mentioned in the document?

select pglogical.create_replication_set('test_slot', false, true, true, false);
ERROR: current database is not configured as pglogical node
HINT: create pglogical node first

However, the step of creating the pglogical node is ONLY covered under a different topic, "Using native CDC...", in the same URL: https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.PostgreSQL.html#CHAP_Source.PostgreSQL.v10. Can someone please explain the right steps to enable logical replication via AWS DMS CDC? Is native CDC different from DMS CDC?
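The error itself points at the pglogical step that seems to be missing from that part of the page: a pglogical node has to be created in the database before replication sets can be defined. A sketch of that step, with placeholder connection values:

```sql
-- Create the pglogical provider node first (dsn values are placeholders).
SELECT pglogical.create_node(
    node_name := 'provider_node',
    dsn := 'host=127.0.0.1 port=5432 dbname=mydb user=repl_user'
);

-- After which the call quoted from the documentation should no longer error:
SELECT pglogical.create_replication_set('test_slot', false, true, true, false);
```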
Falcon (101 rep)
Sep 19, 2022, 12:31 AM
0 votes
0 answers
540 views
Will SQLNET.EXPIRE_TIME=5 prevent AWS Gateway Load Balancer from timing out after 350 seconds?
We have Amazon Web Services (AWS) Database Migration Service (DMS) tasks erroring out against an on premises Oracle 11.2.0.3 database with ORA-03135 errors. Our network traffic between AWS and the database goes through AWS's Gateway Load Balancer (GWLB). In an AWS support case about Debezium we learned that their GWLB has a 350 second timeout that silently drops idle connections on the AWS side while killing them properly on the on premises side. We know that DMS normally communicates with our on premises database more frequently than every 350 seconds but today we seem to have hit the timeout during a period of heavy activity like a large initial load. Oracle's support site says that setting SQLNET.EXPIRE_TIME=5 in our sqlnet.ora on the database server may prevent the 350 second timeout because the 5 minute expire_time setting is less than the almost 6 minute timeout. But Oracle's document also says that expire_time does not work in all cases. My question is whether anyone has experienced this scenario and knows whether setting the sqlnet.expire_time=5 will help with AWS DMS and AWS GWLB and its 350 second timeout? Also, are there any TCP Keepalive settings that we could set within DMS to overcome this timeout? Thanks! Bobby
Bobby Durrett (231 rep)
Jun 13, 2022, 11:48 PM • Last activity: Jun 15, 2022, 11:10 PM
3 votes
1 answers
7070 views
AWS DMS endpoint test connection error
I am trying to migrate Microsoft SQL Server databases from on-premises to an RDS instance. I have created a SQL Server 2014 Express Edition instance in RDS, and I have SQL Server 2014 Express Edition on-premises in my corporate network. I am able to connect to the RDS instance from my on-premises SSMS. While using the DMS service to migrate and creating endpoints, I am getting the error below.

> Error Details:
> [errType=ERROR_RESPONSE, status=1022506, errMessage=Failed to connect Network error has occurred, errDetails= RetCode: SQL_ERROR SqlState: HYT00 NativeError: 0 Message: [unixODBC][Microsoft][ODBC Driver 13 for SQL Server]Login timeout expired ODBC general error.]]

I am getting the same error even when testing the target endpoint, which is the RDS instance. I have kept RDS and the replication instance in the same VPC. Port 1433 is open on my system, and since I am able to connect to the RDS instance via SSMS there shouldn't be a port issue. Please help.
mohan sahu (31 rep)
Feb 28, 2018, 06:03 AM • Last activity: Feb 28, 2022, 03:20 PM
0 votes
0 answers
1506 views
AWS DMS Task failed with error: Error executing source loop; Stream component failed at subtask 0
I want to migrate my Postgres DB hosted in the Citus cloud service to AWS RDS Aurora Postgres. I am using the AWS DMS service. I have created a task but am getting the following errors:

Last failure message: Last Error Stream Component Fatal error. Task error notification received from subtask 0, thread 0 [reptask/replicationtask.c:2860] Error executing source loop; Stream component failed at subtask 0, component st_0_QOIS7XIGJDKNPY6RXMGYRLJQHY2P7IQBWIBA5NQ; Stream component 'st_0_QOIS7XIGJDKNPY6RXMGYRLJQHY2P7IQBWIBA5NQ' terminated [reptask/replicationtask.c:2868]
Stop Reason: FATAL_ERROR
Error Level: FATAL

Frankly speaking, I am not able to understand what is wrong here, so any help is appreciated. [Screenshot: CloudWatch logs]
Ashish Karpe (101 rep)
Oct 15, 2021, 10:48 AM
0 votes
1 answers
825 views
How to set PostgreSQL permissions when migrating another database to it on AWS?
We want to migrate an Oracle DB to a PostgreSQL DB in RDS with DMS. There is an official guide for preparing the PostgreSQL DB: https://docs.aws.amazon.com/dms/latest/sbs/chap-oracle2postgresql.steps.configurepostgresql.html

1. When I run this as the postgres user:

ALTER USER postgresql_dms_user WITH SUPERUSER;

I get:

postgres=> ALTER USER postgresql_dms_user WITH SUPERUSER;
ERROR: must be superuser to alter superusers

2. When I do a grant to the SCT user, I get:

postgres=> GRANT ALL ON SEQUENCES IN SCHEMA main TO postgresql_sct_user;
ERROR: syntax error at or near "IN"
Line 1: GRANT ALL ON SEQUENCES IN SCHEMA main TO postgresql_sct_user...

About 1: AWS Aurora PostgreSQL's postgres user isn't a true superuser:

Role name | Group
postgres  | {rds_superuser}

So how do I grant superuser to a new user?

About 2: I'm using PostgreSQL 12 in RDS; is the syntax not supported?
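For what it's worth, a sketch of the two statements that usually correspond to these steps on RDS, where a true SUPERUSER cannot be granted and the sequence grant needs the ALL ... ALL SEQUENCES form (this is based on standard PostgreSQL/RDS behavior, not a quote from the AWS guide):

```sql
-- On RDS, rds_superuser is the closest available equivalent to SUPERUSER.
GRANT rds_superuser TO postgresql_dms_user;

-- Privileges on every existing sequence in the schema (note the two ALLs).
GRANT ALL ON ALL SEQUENCES IN SCHEMA main TO postgresql_sct_user;
```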
Miantian (177 rep)
Oct 5, 2021, 09:12 AM • Last activity: Oct 6, 2021, 01:20 AM
0 votes
0 answers
421 views
Continuous data replication between two databases (same RDS instance) in AWS RDS
We have multiple databases in a single RDS instance, for example A, B, and C. Databases A and B both contain the plans table, but some of the fields will be missing from the plans table in database B. What is a smart way to keep these tables in sync (like continuous replication)? I checked AWS DMS. When I created a source endpoint using the RDS instance for database A, it was created successfully. But when I tried to create a target endpoint for the same RDS instance with a different database, it threw this error:

SYSTEM ERROR MESSAGE: ReplicationEndpoint "database-1" already in use with endpoint ARN "arn:aws:dms:ap-southeast-1:59351853185:endpoint:VLRTOQM7LBW5M7YUKPOENBTFDUC32N7B5737I"

Am I looking at the right tool, or is there another tool available for this scenario? Any insight on how to achieve this would be greatly appreciated. Thanks.
sdg (101 rep)
Jul 31, 2021, 09:01 AM
0 votes
0 answers
1401 views
AWS DMS migration did not migrate foreign keys or sequences, and indexes got renamed
I have used DMS for an Aurora PostgreSQL to PostgreSQL schema migration. The "Migrate data only" type was used for the DMS task. After the full load, all tables were created on the target endpoint with all their data. The problem is that when we cross-checked the target DB, it had no foreign keys or sequences, and some indexes had been renamed. Also, the DDL doesn't have the default clauses, and the defaults on the tables are blank. Does DMS not take care of these?
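DMS's full load generally creates only the basic table structures it needs to load data, so secondary objects such as foreign keys, sequences and defaults are normally applied from the source DDL afterwards. As one small example of the kind of post-load fix-up involved, a sketch with placeholder names for re-pointing a sequence after the data copy:

```sql
-- Hypothetical example: advance a sequence to match the loaded data
-- ('my_table' and 'id' are placeholder names).
SELECT setval(pg_get_serial_sequence('my_table', 'id'),
              (SELECT COALESCE(MAX(id), 1) FROM my_table));
```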
aditya Mehta (1 rep)
Jun 17, 2021, 12:30 PM
1 votes
2 answers
2780 views
How does AWS DMS ongoing replication work internally?
The documentation mentions that "DMS collects changes to the database logs using the database engine's native API" and replicates them to the target. But I didn't see anywhere at what rate it replicates from source to target in terms of number of records. Is there any way to control this with a setting, or can I see how many rows it replicates each time?
Pand005 (151 rep)
Jan 5, 2021, 10:48 PM • Last activity: Jan 6, 2021, 12:44 PM
0 votes
1 answers
62 views
Migrate Oracle 12c to 18c in AWS RDS
Currently we have an Amazon RDS Oracle 12c instance and we need to upgrade it to Oracle 18c in a **new** RDS instance. What is the correct approach? Can we set up an 18c instance straight away and then restore a 12c snapshot, or do we need to set up a new 12c instance, restore a snapshot, and then upgrade to 18c? Can we use AWS DMS to migrate from RDS 12c to RDS 18c?
thanuja (147 rep)
Jun 4, 2020, 07:09 AM • Last activity: Jun 8, 2020, 11:11 PM
1 votes
0 answers
296 views
How to modify the block size of my S3 parquet files that are being queried in Athena?
I read that adjusting the block size of the parquet files queried by Athena can affect, and possibly improve, query performance. The parquet files for my database are currently created by DMS (with MS SQL Server as the source). Is this a property of my DMS job that I need to change, or of the parquet files in my S3 bucket themselves, or would I need to create an ETL job with Glue to modify the block sizes after they're created?
J.D. (40893 rep)
Jun 4, 2020, 11:08 PM
3 votes
1 answers
5498 views
Do not replicate Delete operations AWS DMS
I am using **AWS DMS** to replicate ongoing changes with **SQL Server** as both the source and target endpoint. The tasks are running and replicating data with low latency. However, I need the tasks configured not to replicate DELETE operations from the source database. I read the whole documentation of the tool and wasn't able to find a configuration option to exclude **DELETE** statements from replication. There is only one place in the whole user guide where it states that if one is performing a **Full Load plus CDC** or a **CDC-only** task, it is recommended that the migration be paused and secondary indexes created to support filtering for update and delete statements. But nothing more, no further explanation. Any help or experience is highly appreciated, please.
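Since DMS itself doesn't appear to expose such an option, one workaround that is sometimes discussed is to absorb deletes on the target side instead. A hedged sketch only; this is not a DMS feature, the table and trigger names are placeholders, and the operational implications should be weighed carefully:

```sql
-- Hypothetical target-side workaround: an INSTEAD OF DELETE trigger that
-- silently discards deletes applied to this table.
CREATE TRIGGER dbo.trg_orders_ignore_delete
ON dbo.orders
INSTEAD OF DELETE
AS
BEGIN
    SET NOCOUNT ON;
    -- Intentionally empty: the delete is not applied on the target.
END;
```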
tOpsDvp (41 rep)
Aug 13, 2019, 12:58 PM • Last activity: May 18, 2020, 10:05 PM
Showing page 1 of 20 total questions