(Background: thinking about caching and syncing ordered key-value stores)
Now for the more challenging part: keeping track of what data is in the cache so we know whether we can fulfill a request locally or not.
For example, suppose we've fetched this result and stored it in the cache:
list({gte: a, lte: b})
fully covered case:
We know what the data range is, so we can add this data to the cache and keep track of the range. When we later get a request within that range, like list({gte: a, lte: ab}), we know we can fulfill it from the cache.
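A minimal sketch of that containment check, assuming string keys (the ListArgs and CachedRange names are mine, not from a particular implementation):

```ts
type ListArgs = { gte?: string; lte?: string }
type CachedRange = { gte: string; lte: string }

// A request is fully covered when both of its bounds sit inside a range we
// already hold in the cache.
function isFullyCovered(args: ListArgs, cached: CachedRange): boolean {
	if (args.gte === undefined || args.lte === undefined) return false
	return args.gte >= cached.gte && args.lte <= cached.lte
}

// isFullyCovered({ gte: "a", lte: "ab" }, { gte: "a", lte: "b" }) // true
```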
prefix case: We may get a request that's covered at the beginning but not at the end.
list({ gte:a })
is covered only at the beginning. If we're rendering these items in a list, it may be good enough to render the covered prefix while fetching the rest of the data, because the rest will populate down the screen without causing jank.
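One way to classify a request against a cached range, as a sketch (the Coverage labels are just illustrative):

```ts
type ListArgs = { gte?: string; lte?: string }
type CachedRange = { gte: string; lte: string }
type Coverage = "full" | "prefix" | "none"

function coverage(args: ListArgs, cached: CachedRange): Coverage {
	if (args.gte === undefined) return "none"
	if (args.gte < cached.gte || args.gte > cached.lte) return "none"
	if (args.lte !== undefined && args.lte <= cached.lte) return "full"
	// Covered from args.gte up to cached.lte; the tail past cached.lte is unknown.
	return "prefix"
}

// A "prefix" result means we can render the cached items right away and fetch
// the remainder starting after cached.lte.
```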
prefix limit-covered case:
If we have a request like list({ gte: a, limit: 10 }), then when we read the data, the last item may be ab, which is less than b. Everything up to ab lies inside the range we already know is complete, so the request is fully covered.
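A sketch of that check, assuming the cache keeps the keys it holds for a covered range in sorted order (the keys field and limitCovered name are just illustrative):

```ts
type CachedRange = { gte: string; lte: string; keys: string[] } // keys sorted ascending

function limitCovered(args: { gte: string; limit: number }, cached: CachedRange): boolean {
	// The cached prefix is only trustworthy if it starts at or before args.gte.
	if (args.gte < cached.gte) return false
	const hits = cached.keys.filter((k) => k >= args.gte).slice(0, args.limit)
	// If the cache can satisfy the whole limit, every result lies inside
	// [cached.gte, cached.lte], so nothing can be missing from the answer.
	return hits.length === args.limit
}
```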
suffix cases:
All of the same logic applies to suffixes as well: list({ lte: b }) is covered only at the end.
reverse cases:
list({ lte: b, reverse: true })
would be considered a prefix case, since a reversed scan starts iterating from its upper bound.
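A tiny sketch of that idea: treat a reversed query as if its bounds were swapped before doing the coverage checks above (normalizeForCoverage is just an illustrative name):

```ts
type ListArgs = { gte?: string; lte?: string; reverse?: boolean }

// A reversed scan starts at the upper bound and walks down, so for coverage
// purposes the "beginning" of the results is the lte side.
function normalizeForCoverage(args: ListArgs): { start?: string; end?: string } {
	if (args.reverse) return { start: args.lte, end: args.gte }
	return { start: args.gte, end: args.lte }
}
```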
Another example: when we fetch data without two-sided bounds, we don't know the data range up front. Instead, we need to wait for the results before we can store the data range.
list({limit: 10}) -> [a, ..., b]
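A sketch of recording the covered range only after the results arrive, assuming results come back as sorted keys (rangeFromResult is an illustrative name):

```ts
function rangeFromResult(
	args: { gte?: string; limit?: number },
	resultKeys: string[]
): { gte: string; lte: string } | undefined {
	if (resultKeys.length === 0) return undefined
	// Use the requested lower bound if there was one, otherwise the first key
	// we actually received.
	const lower = args.gte ?? resultKeys[0]
	// We only know the data is complete up to the last item we received.
	const upper = resultKeys[resultKeys.length - 1]
	return { gte: lower, lte: upper }
}

// rangeFromResult({ limit: 10 }, ["a", "aa", "ab", "b"]) // { gte: "a", lte: "b" }
```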
Next, we need to think about how cache eviction works.
When we request data from the cache, we also subscribe to it. But these ranges aren't the same:
the subscription is keyed by the request args, while the cached range depends on both the args and the
resulting data.
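One way to picture the difference, as a sketch with illustrative field names (not from a particular implementation):

```ts
type ListArgs = { gte?: string; lte?: string; limit?: number; reverse?: boolean }

// The subscription is keyed by exactly what the caller asked for.
type Subscription = {
	args: ListArgs
	callback: (keys: string[]) => void
}

// The cached range is derived from the args plus the data that came back,
// e.g. a limit query only covers up to the last key it returned.
type CachedRange = {
	gte: string
	lte: string
	keys: string[]
}

const subscriptions: Subscription[] = []
const cachedRanges: CachedRange[] = []
```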
Let's walk through an example.
cached: