With Cassandra running into issues with too many tombstones, does BigTable have similar anti-patterns?

0 votes
1 answer
203 views
I've been using Cassandra and have run into various problems with tombstones¹, which caused serious issues when I later ran queries. For example, if I overwrite the same few rows over and over again, then even though only 5 **valid** rows remain in my database, reading them can take minutes because Cassandra has to scan past all the tombstones before finally reaching the useful data.

What I'm wondering is whether Google Bigtable has a similar anti-pattern. In my situation, I will be using Bigtable for many writes, up to 10,000 a minute, and once an hour another app reads the data to update its caches. Many of the writes are actually updates to existing rows (same key).

¹ _Tombstones: markers left behind when a row is deleted or updated in a Cassandra database. They are removed when a compaction of the table runs, assuming it is given time to do so._
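To make the read-amplification concrete, here is a toy sketch (my own illustration, not the real Cassandra storage engine): writes and deletes append markers to a log, and a naive read must scan every stale entry before it can return the one live value. The `ToyLog` class and its methods are hypothetical names invented for this example.

```python
# Toy illustration of tombstone/stale-entry read amplification.
# NOT the real Cassandra storage engine -- just an append-only log
# where every overwrite or delete leaves an entry a reader must skip.

class ToyLog:
    def __init__(self):
        self.entries = []  # (key, value_or_None) in write order

    def write(self, key, value):
        # An overwrite does not replace in place; it appends a new version.
        self.entries.append((key, value))

    def delete(self, key):
        # A delete appends a tombstone marker (value None) instead of erasing.
        self.entries.append((key, None))

    def read(self, key):
        # Scan the whole log; cost grows with every stale entry/tombstone,
        # even though only the last version is "valid".
        scanned, latest = 0, None
        for k, v in self.entries:
            scanned += 1
            if k == key:
                latest = v
        return latest, scanned

log = ToyLog()
for i in range(1000):       # overwrite the same row 1000 times
    log.write("row1", i)
value, scanned = log.read("row1")
print(value, scanned)       # one live value, but 1000 entries scanned
```

Until a compaction-like step rewrites the log to keep only the latest version per key, every read pays for all the stale entries, which is the behavior described above.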
Asked by Alexis Wilke (135 rep)
Feb 7, 2022, 11:17 PM
Last activity: Aug 2, 2024, 05:37 AM