How to add storage-level caching between DynamoDB and Titan?

I am using the Titan/DynamoDB library to use AWS DynamoDB as a backend for my Titan DB graphs. My app is very read-heavy and I noticed Titan is mostly executing query requests against DynamoDB. I am using transaction- and instance-local caches and indexes to reduce my DynamoDB read units and the overall latency. I would like to introduce a cache layer that is consistent for all my EC2 instances: A read/write-through cache between DynamoDB and my application to store query results, vertices, and edges.
I see two solutions to this:
Implicit caching done directly by the Titan/DynamoDB library. Classes like the ParallelScanner could be changed to read from AWS ElastiCache first. The change would have to be applied to read & write operations to ensure consistency.
Explicit caching done by the application before even invoking the Titan/Gremlin API.
The first option seems more fine-grained, cross-cutting, and generic.
Does something like this already exist? Maybe for other storage backends?
Is there a reason why this does not exist already? Graph DB applications tend to be very read-intensive, so cross-instance caching seems like a pretty significant feature to speed up queries.

First, ParallelScanner is not the only thing you would need to change. Most importantly, all the changes you need to make are in DynamoDBDelegate (that is the only class that makes low-level DynamoDB API calls).
Regarding implicit caching, you could add a caching layer on top of DynamoDB. For example, you could implement a cache using API Gateway on top of DynamoDB, or you could use ElastiCache. Either way, you need to figure out a way to invalidate Query/Scan pages. Inserting/deleting items will cause page boundaries to change, so it requires some thought.
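As a sketch of what such an implicit layer could look like, here is a minimal read/write-through item cache in Python. A plain dict stands in for ElastiCache and a `backend` object stands in for the DynamoDB delegate; all names are illustrative and not part of the Titan/DynamoDB library. Note it only covers single-item reads; Query/Scan pages would still need their own invalidation scheme.

```python
import json

class ReadThroughCache:
    """Read/write-through item cache sketch. A dict stands in for
    ElastiCache; `backend` stands in for the low-level DynamoDB layer."""

    def __init__(self, backend):
        self.backend = backend   # object exposing get_item/put_item
        self.store = {}          # stand-in for Redis/Memcached

    def _key(self, table, key):
        # Canonical cache key: table name plus the sorted item key.
        return table + ":" + json.dumps(key, sort_keys=True)

    def get_item(self, table, key):
        ck = self._key(table, key)
        if ck in self.store:
            return self.store[ck]            # cache hit: no DynamoDB read
        item = self.backend.get_item(table, key)
        self.store[ck] = item                # populate on miss
        return item

    def put_item(self, table, key, item):
        # Write-through: update the backend first, then the cache,
        # so the cache never holds an item the backend rejected.
        self.backend.put_item(table, key, item)
        self.store[self._key(table, key)] = item
```

Consistency here is only as good as the write path: every mutation has to go through the cache, which is why hooking this in at the delegate level (rather than per call site) is attractive.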
Explicit caching may be easier to do than implicit caching. The level of abstraction is higher, so based on your incoming writes it may be easier for you to decide at the application level whether a traversal that is cached needs to be invalidated. If you treat your graph application as another service, you could cache the results at the service level.
Something in between may also be possible (but requires some work). You could continue to use your vertex/database caches as provided by Titan, and use a low value for TTL that is consistent with how frequently you write columns. Or, you could take your caching approach a step further and do the following.
Enable DynamoDB Stream on edgestore.
Use a Lambda function to stream the edgestore updates to a Kinesis Stream.
Consume the Kinesis Stream with edgestore updates in the same JVM as the Gremlin Server on each of your Gremlin Server instances. You would need to instrument the database level cache in Titan to consume the Kinesis stream and invalidate the cached columns as appropriate, in each Titan instance.
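The forwarding step in the middle could be a small Lambda like the following sketch. The handler shape follows the standard DynamoDB Streams event format, but the payload layout and the injected Kinesis client are assumptions for illustration, not code from the Titan/DynamoDB library.

```python
import json

def make_stream_forwarder(kinesis_client, stream_name):
    """Returns a Lambda-style handler that forwards edgestore updates
    from a DynamoDB Stream event to a Kinesis stream. The Kinesis
    client is injected so the sketch stays testable."""
    def handler(event, context=None):
        for record in event["Records"]:
            ddb = record["dynamodb"]
            kinesis_client.put_record(
                StreamName=stream_name,
                # Ship just the keys: the consumers only need to know
                # which cached columns to invalidate.
                Data=json.dumps({"keys": ddb["Keys"],
                                 "event": record["eventName"]}),
                # Same keys -> same shard, preserving per-item ordering.
                PartitionKey=json.dumps(ddb["Keys"], sort_keys=True),
            )
        return len(event["Records"])
    return handler
```

The consumer on each Gremlin Server instance would read these records and evict the matching columns from the database-level cache.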

Related

With the CQRS pattern, how are you still limited to one service per database?

According to my understanding
We should only have one service connecting to a database
With CQRS you will be keeping two databases in sync, hypothetically using some “service” gluing them together
Doesn’t that now mean there’s a service whose only purpose is to keep the two in sync, and another service to access the data?
Questions
Doesn’t that go against the rule above? Or does this pattern only apply when native replication is being used?
Also, other than being able to independently scale the replicated database for more frequent reads, does the process of keeping both in sync kind of take away from that? Either way we’re writing the same data to both in the end.
Ty!
We should only have one service connecting to a database
I would rephrase this as: each service should be accessible via that service's API, and all internals, like the database, should be completely hidden. Hence, there should be no (logical) database sharing between services.
With CQRS you will be keeping two databases in sync, hypothetically using some “service” glueing them together
CQRS is a pattern for splitting how a service talks to a data layer. A typical example would be separating reads and writes, as those are fundamentally different. E.g. you do writes as commands via a queue and reads as exports via some stream.
CQRS is just an access pattern, using it (or not using it) does nothing for synchronization. If you do need a service to keep two other ones in sync, then you still should use services' api's instead of going into the data layer directly. And CQRS could be under those api's to optimize data processing.
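To make that read/write split concrete, here is a minimal, in-memory CQRS sketch: commands append to a write-side event log, and a projector keeps a separate read model in sync. The class and method names are hypothetical; in a real system the projection is usually asynchronous.

```python
class CqrsStore:
    """Minimal CQRS sketch: the write side is an event log (the source
    of truth), the read side is a derived model optimized for queries.
    Both sides live behind one service API."""

    def __init__(self):
        self.events = []      # write side
        self.read_model = {}  # read side, derived from events

    def handle_command(self, entity_id, field, value):
        event = {"id": entity_id, "field": field, "value": value}
        self.events.append(event)
        self._project(event)  # often async (queue/stream) in practice

    def _project(self, event):
        # Fold the event into the query-optimized read model.
        entity = self.read_model.setdefault(event["id"], {})
        entity[event["field"]] = event["value"]

    def query(self, entity_id):
        return self.read_model.get(entity_id, {})
```

Note that both "databases" belong to the same service; no second service reaches into either one directly.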
The text above might address your first question. As for the second one: keeping the database encapsulated in a service does allow that database (and service) to be scaled as needed. So if you are using replication for reads, that would be a reasonable solution (assuming you address async vs. sync replication).
As for "writing data on both ends", I am actually confused about what that means...

Does storing another service's data violate the Single Responsibility Principle of Microservice

Say I have a service that manages warehouses (it is not very frequently updated). I have a sales service that requires the list of stores (to search through and use as necessary). If I get the list of stores from the store service and save it (let's say in Redis) inside my sales service, but ensure that Redis is updated if the list of stores changes, would it violate the single responsibility principle of microservice architecture?
No, it does not. It is actually quite a common approach in microservice architecture for a service to store a copy of related data from other services and use some mechanism to sync it (usually async communication via a message broker).
Storing a copy of the data does not transfer ownership of that data away from the service which manages it.
It is common, and there is a related microservice pattern for it (CQRS).
If you need some information from other services / microservices to join with your data, then you need to store that information.
Whenever you are deciding whether to always issue requests against the downstream system or to use a local copy, you are basically making a trade-off analysis between performance and data freshness.
If you always issue RPC calls then you prefer data freshness over performance.
The frequency of RPC calls has a direct impact on performance.
If you utilize caching to gain performance then there is a chance of using stale data (depending on your business this might be okay or unacceptable).
Cache invalidation is a pretty tough problem domain, so it can cause headaches.
Caching one microservice's data does not violate data ownership, because caching just reads the data; it does not delete or update existing records. It is similar to having a single-leader (master), multiple-followers setup, or a read-write lock. As long as there is only one place where data can be created, modified, or deleted, data ownership is implemented the right way.
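As an illustration of that single-writer rule, here is a sketch of the sales-service-side copy from the question: it only consumes change events published by the store service (the message broker is reduced to a method call here) and never originates writes of its own. All names are hypothetical.

```python
class StoreReplica:
    """Sales-service-side, read-only copy of the store service's data,
    kept in sync by consuming change events from a broker. The copy
    never creates, updates, or deletes stores itself, so ownership
    stays with the store service."""

    def __init__(self):
        self.stores = {}  # stand-in for the Redis copy

    def on_event(self, event):
        # Only the store service publishes these events.
        if event["type"] == "store_deleted":
            self.stores.pop(event["id"], None)
        else:  # store_created / store_updated
            self.stores[event["id"]] = event["data"]

    def search(self, name_part):
        # Local reads: no RPC to the store service needed.
        return [s for s in self.stores.values() if name_part in s["name"]]
```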

When to use Redis and when to use DataLoader in a GraphQL server setup

I've been working on a GraphQL server for a while now and although I understand most of the aspects, I cannot seem to get a grasp on caching.
When it comes to caching, I see both DataLoader mentioned as well as Redis but it's not clear to me when I should use what and how I should use them.
I take it that DataLoader is used more at the field level to counter the N+1 problem? And I guess Redis is at a higher level then?
If anyone could shed some light on this, I would be most grateful.
Thank you.
DataLoader is primarily a means of batching requests to some data source. However, it does optionally utilize caching on a per request basis. This means, while executing the same GraphQL query, you only ever fetch a particular entity once. For example, we can call load(1) and load(2) concurrently and these will be batched into a single request to get two entities matching those ids. If another field calls load(1) later on while executing the same request, then that call will simply return the entity with ID 1 we fetched previously without making another request to our data source.
DataLoader's cache is specific to an individual request. Even if two requests are processed at the same time, they will not share a cache. DataLoader's cache does not have an expiration -- and it has no need to since the cache will be deleted once the request completes.
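DataLoader itself is a JavaScript library, but its batch-and-cache semantics can be sketched in a few lines of Python. This is a simplified model (synchronous, with an explicit dispatch step instead of the event-loop tick), not the real API.

```python
class Loader:
    """Sketch of DataLoader's per-request batch-and-cache behavior.
    Keys queued before dispatch() are fetched in one batch; repeated
    keys are served from the per-request cache."""

    def __init__(self, batch_fn):
        self.batch_fn = batch_fn  # takes [keys] -> [values], same order
        self.cache = {}           # lives only as long as the request
        self.queue = []

    def load(self, key):
        if key not in self.cache:
            self.queue.append(key)
        # Returns a thunk, resolved once dispatch() has run
        # (standing in for the Promise the real library returns).
        return lambda: self.cache[key]

    def dispatch(self):
        # De-duplicate queued keys and skip anything already cached.
        uniq = []
        for k in self.queue:
            if k not in self.cache and k not in uniq:
                uniq.append(k)
        self.queue = []
        if not uniq:
            return  # everything was a cache hit: no data-source call
        for k, v in zip(uniq, self.batch_fn(uniq)):
            self.cache[k] = v
```

Calling load(1), load(2), load(1) results in a single batch for [1, 2]; a later load(1) never touches the data source again within the same request.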
Redis is a key-value store that's used for caching, queues, PubSub and more. We can use it to provide response caching, which would let us effectively bypass the resolver for one or more fields and use the cached value instead (until it expires or is invalidated). We can use it as a cache layer between GraphQL and the database, API or other data source -- for example, this is what RESTDataSource does. We can use it as part of a PubSub implementation when implementing subscriptions.
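A minimal model of that response-caching role, with a dict standing in for Redis and a TTL mimicking `SET key value EX ttl` (the clock is injected so expiry is testable; names are illustrative):

```python
import time

class ResponseCache:
    """Response-cache sketch: a fresh hit bypasses the resolver
    entirely; entries expire after a TTL or can be invalidated."""

    def __init__(self, ttl_seconds, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock
        self.entries = {}  # key -> (expires_at, value)

    def get_or_compute(self, key, compute):
        entry = self.entries.get(key)
        if entry and entry[0] > self.clock():
            return entry[1]                      # fresh: resolver skipped
        value = compute()                        # miss or expired
        self.entries[key] = (self.clock() + self.ttl, value)
        return value

    def invalidate(self, key):
        self.entries.pop(key, None)
```

The `key` would typically be derived from the query document and variables; `compute` is whatever would otherwise run the resolvers.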
DataLoader is a small library used to tackle a particular problem, namely generating too many requests to a data source. The alternative to using DataLoader is to fetch everything you need (based on the requested fields) at the root level and then letting the default resolver logic handle the rest. Redis is a key-value store that has a number of uses. Whether you need one or the other, or both, depends on your particular business case.

Amazon Web Services: Spark Streaming or Lambda

I am looking for some high level guidance on an architecture. I have a provider writing "transactions" to a Kinesis pipe (about 1MM/day). I need to pull those transactions off, one at a time, validating data, hitting other SOAP or Rest services for additional information, applying some business logic, and writing the results to S3.
One approach that has been proposed is to use a Spark job that runs forever, pulling data and processing it within the Spark environment. The benefits were enumerated as shareable cached data, availability of SQL, and in-house knowledge of Spark.
My thought was to have a series of Lambda functions that would process the data. As I understand it, I can have a Lambda watching the Kinesis pipe for new data. I want to run the pulled data through a bunch of small steps (lambdas), each one doing a single step in the process. This seems like an ideal use of Step Functions. With regards to caches, if any are needed, I thought that Redis on ElastiCache could be used.
Can this be done using a combination of Lambda and Step Functions (using lambdas)? If it can be done, is it the best approach? What other alternatives should I consider?
This can be achieved using a combination of Lambda and Step Functions. As you described, the lambda would monitor the stream and kick off a new execution of a state machine, passing the transaction data to it as an input. You can see more documentation around kinesis with lambda here: http://docs.aws.amazon.com/lambda/latest/dg/with-kinesis.html.
The state machine would then pass the data from one Lambda function to the next where the data will be processed and written to S3. You need to contact AWS for an increase on the default 2 per second StartExecution API limit to support 1MM/day.
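The triggering Lambda could look roughly like the following. The handler shape follows the standard Kinesis event format; the Step Functions client is injected so the sketch stays testable, and the payload layout and ARN are assumptions for illustration.

```python
import base64
import json
import uuid

def make_kinesis_trigger(sfn_client, state_machine_arn):
    """Returns a Lambda-style handler that starts one Step Functions
    execution per Kinesis record. At ~1MM records/day (~12/s average)
    the default StartExecution rate limit needs raising, as noted."""
    def handler(event, context=None):
        started = []
        for record in event["Records"]:
            # Kinesis delivers the record payload base64-encoded.
            payload = base64.b64decode(record["kinesis"]["data"]).decode()
            resp = sfn_client.start_execution(
                stateMachineArn=state_machine_arn,
                name=str(uuid.uuid4()),  # execution names must be unique
                input=json.dumps({"transaction": json.loads(payload)}),
            )
            started.append(resp["executionArn"])
        return started
    return handler
```

Each state in the machine would then be one of the small validation/enrichment Lambdas, with the final state writing to S3.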
Hope this helps!

Data-aware load balancing with embedded and distributed caches/datagrids

Sorry, I'm a beginner in load balancing.
In distributed environments we tend more and more to send the processing (map/reduce) to the data, so that the result gets computed locally and then aggregated.
What I'd like to do applies to partitioned/distributed data, not replicated data.
Following the same kind of principle, I'd like to be able to send a user request to the server where that user's data is cached.
When using an embedded cache or datagrid to get low response times, and the dataset is large, we tend to avoid replication and use distributed/partitioned caches.
The partitioning algorithms are generally hash-based and allow replicas to handle server failures.
So in the end, a user's data is generally hosted on something like 3 servers (1 primary copy and 2 replicas).
On a local cache miss, the caches are generally able to search for the entry on other cache peers.
This works fine but needs a network access.
I'd like to have a load-balancing strategy that avoids this unnecessary network call.
What I'd like to know: is it possible to have a load balancer that is aware of the partitioning mechanism of the cache, so that it always forwards to one of the webservers holding a local copy of the data we need?
For example, I have a request www.mywebsite.com/user=387
The load balancer would check the 387 userId and know that this user is stored on servers 1, 6 and 12. It could then round-robin to one of them, or apply some other strategy.
If there's no generic solution, are there open-source or commercial, software or hardware load balancers that permit defining custom routing strategies?
How much will extracting data from a request slow down the load balancer? What's the cost of extracting a URL parameter (as in my example with user=387) and following some rules to reach the right webserver, compared to a round-robin strategy, for example?
Is there an abstraction library on top of cache vendors so that we can easily retrieve the partitioning data and make it available to the load balancer?
Thanks!
Interesting question. I don't think there is a readily available solution for your requirements, but it would be pretty easy to build if your hashing criterion is relatively simple and depends only on the request (a URL parameter, as in your example).
If I were building this, I would use Varnish (http://varnish-cache.org), but you could do the same in other reverse proxies.
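Whatever proxy you choose, the routing logic itself is small. Here is a Python sketch under the assumption that the balancer knows the cache's server list and can reproduce its placement function; rendezvous hashing is used purely as an illustration, and your grid's real partitioner may differ, so the two must be kept in agreement for this to route correctly.

```python
import hashlib

def partition_servers(user_id, servers, replicas=3):
    """Recompute the cache's hash-based placement in the balancer:
    rank servers by a per-(key, server) hash and keep the top N
    (primary + replica holders). Rendezvous hashing, illustrative."""
    scored = sorted(
        servers,
        key=lambda s: hashlib.md5(f"{user_id}:{s}".encode()).hexdigest(),
    )
    return scored[:replicas]

def route(user_id, servers, request_no):
    """Round-robin among the servers that hold user_id locally,
    so the chosen webserver never needs a peer network hop."""
    owners = partition_servers(user_id, servers)
    return owners[request_no % len(owners)]
```

In Varnish this computation would live in VCL (or a VMOD), with the chosen server mapped to a backend; the per-request cost is one hash per server, which is negligible next to a network round trip.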
