Practices to maintain historical data via replication?

0 votes
0 answers
69 views
We have a banking-style use case: every day we generate a large number of transactions, mostly inserts into a transaction table and a few related tables. Transactions are mostly inserted and then updated within a day or two. Our database is Oracle 12c (with a planned upgrade to 19c).

We need to set up a big-data cluster (Oracle or MongoDB/Cassandra), call it X, so we can support customers fetching account statements for older periods. Our primary Oracle database (and its Data Guard replicated standby) should keep only 3 months of data; anyone querying data older than 3 months must use cluster X.

What are the industry practices for building such replication? The process must be:

- Stable, resilient, and well monitored
- Scalable (cluster X must be able to grow)
- Highly consistent, and it must never lose data

Our initial thought: set up an Oracle GoldenGate pipeline from the source DB to a y-node sharded Oracle DB cluster. Although Oracle 19c sharded databases do support adding a node with automatic rebalancing, I'm reluctant to use a relational DB for this. At the same time, replicating to a non-relational DB like MongoDB/Cassandra makes me skeptical because of potential instabilities in CDC pipelines and NoSQL consistency models. What do banks do in such cases?
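Whatever store ends up behind cluster X, the application layer needs a rule for deciding which store serves a given statement request. A minimal sketch of that routing decision, assuming a hypothetical 90-day retention boundary on the primary (the function name and constant are illustrative, not from any Oracle or GoldenGate API):

```python
from datetime import date, timedelta

# Assumed retention window: the primary DB keeps roughly 3 months of data.
RETENTION_DAYS = 90

def route_statement_query(requested_from, today=None):
    """Decide which store should serve an account-statement query.

    Returns "primary" when the requested start date falls inside the
    retention window, otherwise "cluster_x" (the archive cluster).
    """
    today = today or date.today()
    cutoff = today - timedelta(days=RETENTION_DAYS)
    return "primary" if requested_from >= cutoff else "cluster_x"
```

A practical wrinkle this sketch glosses over: queries that span the boundary (e.g. "last 4 months") either need to fan out to both stores and merge, or the retention windows must overlap slightly so near-boundary requests can be served from one side.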
Asked by Sachin Verma (839 rep)
Jan 7, 2024, 05:00 PM
Last activity: Jan 7, 2024, 05:05 PM