Slow inserts in Oracle 12c in-memory table
3 votes · 1 answer · 2232 views
We are running some tests to check the in-memory performance on a **12c (12.1.0.2.4 EE) RAC**. The servers have 56 GB of memory and 20-core CPUs.
Our plan is to have a few read-performance-critical tables in memory and the rest on disk. The test was to first populate the tables using our insert tool and then run queries against them using JMeter (a web application benchmark tool).
The insert tool basically reads records from a file, inserts them into the DB in blocks of 500 rows, and commits after every two blocks.
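The read-insert-commit loop is roughly equivalent to the PL/SQL sketch below (a minimal illustration only: the real tool reads from a file rather than generating rows, and the single hypothetical `id` column stands in for our 90-column table):

```
-- Sketch of the tool's insert pattern:
-- 500-row array inserts, one commit per two blocks.
DECLARE
  TYPE t_ids IS TABLE OF NUMBER;
  l_ids t_ids;
BEGIN
  FOR b IN 1 .. 4 LOOP                          -- small demo run: 4 blocks
    SELECT (b - 1) * 500 + LEVEL
      BULK COLLECT INTO l_ids
      FROM dual CONNECT BY LEVEL <= 500;        -- stands in for reading the file
    FORALL i IN 1 .. l_ids.COUNT                -- one array insert per block
      INSERT INTO test_table (id) VALUES (l_ids(i));
    IF MOD(b, 2) = 0 THEN
      COMMIT;                                   -- commit after every second block
    END IF;
  END LOOP;
  COMMIT;
END;
/
```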
We started testing with one table and observed slow insert rates straight away. **But when the table is made NO INMEMORY, the insert rates are fine.**
**The table has 90 columns, 1 trigger, and 15 indexes.**
The test preparation and results are given below.
**Preparation**
_______________
1) Create the table, trigger, and indexes.
2) Make the table in-memory using "alter table test_table inmemory priority critical" (a population check is sketched below).
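To confirm the table actually made it into the column store after step 2, we can query V$IM_SEGMENTS (the standard 12c in-memory view); a minimal check, assuming the TEST_TABLE name from the ALTER above:

```
-- Population status of the in-memory segment
SELECT segment_name,
       populate_status,        -- STARTED / COMPLETED
       bytes_not_populated     -- 0 once fully populated
FROM   v$im_segments
WHERE  segment_name = 'TEST_TABLE';
```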
**Results**
______________
Without the in-memory option (~7000 recs/sec)

Avg time to read 1 record = [0.0241493] ms
Avg time to insert 1 record = [0.141788] ms
Avg time to insert 1 block of 500 rows = [70.894] ms
Avg time to commit 2 blocks (500 rows per block) = [3.888] ms
Total time for 2000 blocks of inserts = [141.788] s, at [7052.78] recs/s
Total time for 1000 commits = [3.888] s
Total time for 2000 blocks of inserts + 1000 commits = [145.676] s
Total time to read 1000000 records from file = [24.1493] s
Total time to read 1000000 records + 2000 blocks of inserts + 1000 commits = [169.825] s
With the in-memory option (~200 recs/sec)

Avg time to read 1 record = [0.0251651] ms
Avg time to insert 1 record = [4.62541] ms
Avg time to insert 1 block of 500 rows = [2312.7] ms
Avg time to commit 2 blocks (500 rows per block) = [3.32] ms
Total time for 200 blocks of inserts = [462.541] s, at [216.197] recs/s
Total time for 100 commits = [0.332] s
Total time for 200 blocks of inserts + 100 commits = [462.873] s
Total time to read 100000 records from file = [2.51651] s
Total time to read 100000 records + 200 blocks of inserts + 100 commits = [465.39] s
The memory parameters of the DB are given below.
NAME                                         TYPE         VALUE
-------------------------------------------- ------------ ---------
lock_sga                                     boolean      FALSE
pre_page_sga                                 boolean      TRUE
sga_max_size                                 big integer  30G
sga_target                                   big integer  30G
unified_audit_sga_queue_size                 integer      1048576
inmemory_clause_default                      string
inmemory_force                               string       DEFAULT
inmemory_max_populate_servers                integer      8
inmemory_query                               string       ENABLE
inmemory_size                                big integer  10G
inmemory_trickle_repopulate_servers_percent  integer      1
optimizer_inmemory_aware                     boolean      TRUE
buffer_pool_keep                             string
buffer_pool_recycle                          string
db_block_buffers                             integer      0
log_buffer                                   big integer  1048552K
use_indirect_data_buffers                    boolean      FALSE
memory_max_target                            big integer  0
memory_target                                big integer  0
pga_aggregate_limit                          big integer  8G
pga_aggregate_target                         big integer  4G
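The listing above is the combined output of parameter queries along these lines (standard SQL*Plus commands; the exact filters we used may have differed):

```
show parameter sga
show parameter inmemory
show parameter buffer
show parameter memory
show parameter pga
```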
We also tried the following, but the results were the same.
1) Stop one instance of the RAC (2-node RAC).
2) Change the in-memory priority to "high" and then "low" (statements shown below).
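The priority changes in (2) were plain ALTER TABLE statements of this form:

```
-- Re-population priority variants tried; neither changed the insert rate
alter table test_table inmemory priority high;
alter table test_table inmemory priority low;
```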
Hope someone can point me in the right direction.
Asked by Ahamed Fazlul Wahab
(31 rep)
Aug 27, 2015, 08:49 AM
Last activity: May 7, 2020, 08:52 AM