Delta. This is used to make new data immediately available for reads. Once
the Delta is full, it is flushed to object storage as a SlateDB SST. While
the Delta implementation is domain-specific (unique to each database), the
management mechanism is common across all databases, ensuring that the ingest
semantics are correct and well tested.
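To make the split between domain-specific Deltas and the shared management layer concrete, here is a minimal Rust sketch. The `Delta` trait, `DeltaManager`, and `flush_as_sst` are hypothetical names for illustration, not SlateDB's actual API:

```rust
/// Each database implements its own in-memory Delta format; only these
/// hooks are required by the shared management layer (hypothetical).
trait Delta {
    /// Buffer a write so it is immediately visible to reads.
    fn apply(&mut self, key: &[u8], value: &[u8]);
    /// Report whether the Delta has reached its flush threshold.
    fn is_full(&self) -> bool;
    /// Drain the buffered entries so they can be encoded as an SST.
    fn drain(&mut self) -> Vec<(Vec<u8>, Vec<u8>)>;
}

/// Shared management mechanism: identical across databases, so the
/// ingest semantics are implemented and tested once.
struct DeltaManager<D: Delta> {
    delta: D,
}

impl<D: Delta> DeltaManager<D> {
    fn write(&mut self, key: &[u8], value: &[u8]) {
        self.delta.apply(key, value);
        if self.delta.is_full() {
            // Hand the drained entries to SlateDB, which encodes them
            // as an SST and uploads it to object storage (stubbed here).
            flush_as_sst(self.delta.drain());
        }
    }
}

fn flush_as_sst(_entries: Vec<(Vec<u8>, Vec<u8>)>) {}
```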
LSM tree maintenance tasks such as compaction and garbage collection are handled
asynchronously and can run on a separate runtime (or even on a different
machine). While each database can implement its own compaction strategy, we
delegate the compaction execution to SlateDB. See Compaction in SlateDB.
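A sketch of how this division of labor might look: the database supplies the strategy (what to compact), while execution runs on a dedicated thread off the ingest path. The trait and the `list_live_ssts` / `execute_compaction` helpers are assumptions for illustration; in the real system execution is delegated to SlateDB's compactor.

```rust
use std::{thread, time::Duration};

/// Hypothetical strategy trait: each database decides *what* to compact.
trait CompactionStrategy: Send + 'static {
    /// Pick which SSTs (by id) to merge next, or None if nothing to do.
    fn pick_job(&self, live_ssts: &[u64]) -> Option<Vec<u64>>;
}

/// Run maintenance on a dedicated thread so it never blocks ingest; it
/// could just as well live in a separate process or on another machine.
fn spawn_compactor<S: CompactionStrategy>(strategy: S) -> thread::JoinHandle<()> {
    thread::spawn(move || loop {
        let live = list_live_ssts(); // hypothetical catalog lookup
        if let Some(job) = strategy.pick_job(&live) {
            execute_compaction(&job); // stand-in for SlateDB's compactor
        }
        thread::sleep(Duration::from_secs(10));
    })
}

fn list_live_ssts() -> Vec<u64> { Vec::new() }
fn execute_compaction(_sst_ids: &[u64]) {}
```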
Flexible Durability Guarantees
The common write coordination mechanism lets users control how durable a write must be before it is acknowledged, trading durability for write latency. This tradeoff is fundamental to the design of object-store native systems. At one end of the spectrum, writes can be acknowledged as soon as the data is buffered in the in-memory Delta. This offers the lowest latency but means that data written since the last WAL flush is at risk if the process crashes. At the other end, writes can wait until the WAL is flushed to object storage before being acknowledged, providing full durability at the cost of higher write latency.

| Durability Level | Ack Condition | Latency | Data at Risk |
|---|---|---|---|
| Buffered | Data written to in-memory Delta | Lowest | Writes since last WAL flush |
| WAL-durable | WAL flushed to object storage | Higher | None (survives process crash) |
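The table maps naturally onto a per-write durability option. Below is a minimal sketch, assuming a hypothetical `Durability` enum and write path (the real API may differ):

```rust
/// The two acknowledgement points from the table above (names illustrative).
enum Durability {
    /// Ack once the write is in the in-memory Delta: lowest latency, but
    /// writes since the last WAL flush are lost if the process crashes.
    Buffered,
    /// Ack only after the WAL is flushed to object storage: higher
    /// latency, but the write survives a process crash.
    WalDurable,
}

fn write(key: &[u8], value: &[u8], durability: Durability) {
    buffer_in_delta(key, value); // immediately visible to reads
    match durability {
        Durability::Buffered => {
            // Return right away; the WAL flush happens in the background.
        }
        Durability::WalDurable => {
            // Block until the WAL segment containing this write has been
            // uploaded to object storage.
            wait_for_wal_flush();
        }
    }
}

fn buffer_in_delta(_key: &[u8], _value: &[u8]) {}
fn wait_for_wal_flush() {}
```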