
Being hit with memory limit restrictions in GridDB despite increasing the SQL and general storeMemoryLimit

1 vote
1 answer
25 views
I have a table with ~1 million rows that a Java program reads via the TQL interface (i.e. the normal Java API, no SQL). When I try to read the container, I'm met with a memory error:

```
Exception in thread "main" com.toshiba.mwcloud.gs.common.GSStatementException: [1043:CM_MEMORY_LIMIT_EXCEEDED] Memory limit exceeded (name=transactionWork.workerStack, requestedSize=67108880, totalSizeLimit=134217728, freeSizeLimit=1048576, totalSize=74448896, freeSize=0) (address=172.18.0.2:10001, partitionId=29)
```

The obvious solution is to increase the storeMemoryLimit, so I doubled it from the default 1024MB to 2048MB. Even with the higher limit, the Java program hits the same error, and the reported values of totalSizeLimit etc. remain unchanged, which suggests that raising the storeMemoryLimit further would not help in this case (though I have tried).

I have also changed the compression method from NONE to the new ZSTD in the hope that it could compress the dataset enough to be read at once without hitting the memory limit, but this has also failed.

My only other idea is to switch from TQL to SQL and increase the SQL memory limit, but I'd rather not rework my entire codebase. Any ideas on getting this entire table read in one shot?
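For reference, the read path looks roughly like this (a minimal sketch; the container name, row handling, and connection properties are placeholders for my actual setup, not the real values):

```java
import java.util.Properties;

import com.toshiba.mwcloud.gs.Container;
import com.toshiba.mwcloud.gs.GSException;
import com.toshiba.mwcloud.gs.GridStore;
import com.toshiba.mwcloud.gs.GridStoreFactory;
import com.toshiba.mwcloud.gs.Query;
import com.toshiba.mwcloud.gs.Row;
import com.toshiba.mwcloud.gs.RowSet;

public class ReadWholeTable {
    public static void main(String[] args) throws GSException {
        // Connection properties are placeholders for my cluster settings.
        Properties props = new Properties();
        props.setProperty("notificationMember", "172.18.0.2:10001");
        props.setProperty("clusterName", "myCluster");
        props.setProperty("user", "admin");
        props.setProperty("password", "admin");
        GridStore store = GridStoreFactory.getInstance().getGridStore(props);

        // "myTable" stands in for the real ~1M-row container.
        Container<Object, Row> container = store.getContainer("myTable");

        // Plain TQL select over the whole container; the fetch() call
        // is where the CM_MEMORY_LIMIT_EXCEEDED error is raised.
        Query<Row> query = container.query("SELECT *");
        RowSet<Row> rows = query.fetch();
        while (rows.hasNext()) {
            Row row = rows.next();
            // ... process row ...
        }
        store.close();
    }
}
```

(Running this requires a live GridDB cluster and the GridDB Java client on the classpath; it is only meant to show the shape of the code that triggers the error.)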
Asked by L. Connell (69 rep)
Jul 9, 2024, 03:03 PM
Last activity: Jul 15, 2024, 10:41 PM