How to exchange data between instances of the same service in Consul? - microservices

I'm trying a combination of Spring Cloud and Consul and I wonder if there is a way to exchange data and state between instances of the same microservice.
For example, I have AuthenticationService1 (AS1) and AuthenticationService2 (AS2). When a user comes to AS1, he logs in and receives a token, and the next time he comes the token is only verified. But at that moment AS2 is not aware of the state of AS1.
I saw ideas about using a database table where information about user sessions is stored, but maybe there is an easier way for AS1 to share its state with AS2, or to send a message about the login?

Consul is a service management tool (discovery, configuration, ...), not a cache or pub-sub system.
You may want to use a shared cache behind the scenes for your use case. Your AS1 service authenticates a user, then puts the token in the cache. AS2 can retrieve the token from the cache. For that, you can use products like
Redis
Hazelcast
Infinispan
... or other options, like storing the data in a DB ...
You can also use a pub-sub system plus a local cache for each ASx, but then you can run into issues when an AS restarts (its cache is lost). So from my point of view, a shared cache is better.
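For illustration, here is a minimal sketch of the shared-cache approach, assuming Spring Data Redis and an auto-configured `StringRedisTemplate`; the key layout (`auth:token:<token>`) and the TTL are made up for the example, not prescribed by anything above:

```java
import java.time.Duration;

import org.springframework.data.redis.core.StringRedisTemplate;
import org.springframework.stereotype.Service;

@Service
public class TokenStore {

    private final StringRedisTemplate redis;

    public TokenStore(StringRedisTemplate redis) {
        this.redis = redis;
    }

    // Called by AS1 after a successful login. Any other instance (AS2, AS3, ...)
    // sees the token immediately, because it lives in Redis rather than in AS1's memory.
    public void storeToken(String token, String userId) {
        redis.opsForValue().set("auth:token:" + token, userId, Duration.ofHours(1));
    }

    // Called by whichever instance receives the next request.
    public boolean isValid(String token) {
        return Boolean.TRUE.equals(redis.hasKey("auth:token:" + token));
    }
}
```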

Related

Microservice failure Scenario

I am working on a microservice architecture. One of my services is exposed to a source system, which is used to post data. This microservice publishes the data to Redis pub/sub, which is then consumed by a couple of other microservices.
Now, if one of those other microservices is down and unable to process the data from Redis pub/sub, I have to retry with the published data when that microservice comes back up. The source cannot push the data again and manual intervention is not possible, so I thought of 3 approaches:
1. Additionally using Redis itself for storing and retrieving the data.
2. Using a database to store the data before publishing. I have many source and target microservices which use Redis pub/sub. With this approach, every time I would have to insert the request into the DB first, and then its response status. I would also have to use a shared database; this approach adds a couple more exception-handling cases and does not look very efficient to me.
3. Using Kafka in place of Redis pub/sub. Traffic is low, which is why I chose Redis pub/sub, so it is not feasible to change.
In the first two cases, I have to use a scheduler, and I have a deadline before which I have to retry, or else subsequent requests will fail.
Is there any other way to handle the above cases?
For point 2:
- Store the data in DB.
- Create a daemon process which will process the data from the table.
- This Daemon process can be configured well as per our needs.
- Daemon process will poll the DB and publish the data, if any. Also, it will delete the data once published.
This is not specific to a microservice architecture, but I have seen this approach work efficiently when communicating with 3rd-party services.
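As a rough sketch of such a daemon, assuming Spring scheduling is enabled (@EnableScheduling), a hypothetical `PendingMessage` entity with its `PendingMessageRepository` over the staging table, and a `StringRedisTemplate` for republishing; the names and polling interval are illustrative:

```java
import java.util.List;

import org.springframework.data.redis.core.StringRedisTemplate;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class OutboxDaemon {

    private final PendingMessageRepository repository; // hypothetical repository over the staging table
    private final StringRedisTemplate redis;

    public OutboxDaemon(PendingMessageRepository repository, StringRedisTemplate redis) {
        this.repository = repository;
        this.redis = redis;
    }

    // Poll the table, publish whatever is pending, and delete each row only after it has been published.
    @Scheduled(fixedDelay = 5000)
    public void publishPending() {
        List<PendingMessage> pending = repository.findAll();
        for (PendingMessage message : pending) {
            redis.convertAndSend(message.getChannel(), message.getPayload());
            repository.delete(message);
        }
    }
}
```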
At the very outset, as you mentioned, we do indeed seem to have only three possibilities.
This is one of those situations where you want a handshake from the consuming service both after pushing and after processing. To accomplish that, a middleware queuing system is the right fit.
Although a bit more complex to set up, you can use Kafka for streaming this. Configuring the producer and consumer groups properly can help you do the job smoothly.
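As a rough sketch of the consumer side, assuming the plain Kafka client: disabling auto-commit and committing offsets only after processing gives you the "handshake after processing", so an instance that crashes mid-batch re-reads the same records when it comes back. The broker address, topic, and group id below are illustrative:

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class SourceDataConsumer {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "target-service");      // one consumer group per target microservice
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");     // commit only after successful processing
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("source-data"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    process(record.value()); // if this throws, the offset is never committed and the record is redelivered
                }
                consumer.commitSync(); // the "handshake after processing"
            }
        }
    }

    private static void process(String payload) {
        // hand the payload to the business logic here
    }
}
```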
Using a DB to store the data would be overkill, considering that the data in question only needs to be held until it is processed, not persisted long-term.
Alternatively, storing the data in Redis and reading it in a cron/scheduled job would make your job much simpler. Once the job has run successfully, you may remove the data from the cache and thus save Redis memory.
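A minimal sketch of that alternative, again assuming Spring scheduling and a `StringRedisTemplate`; pending payloads are kept in a Redis list under a made-up `pending:events` key and popped off as the scheduled job works through them:

```java
import org.springframework.data.redis.core.StringRedisTemplate;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class RedisRetryJob {

    private static final String PENDING_KEY = "pending:events"; // illustrative key

    private final StringRedisTemplate redis;

    public RedisRetryJob(StringRedisTemplate redis) {
        this.redis = redis;
    }

    // The publisher also pushes each payload onto the pending list when it publishes.
    public void recordPending(String payload) {
        redis.opsForList().leftPush(PENDING_KEY, payload);
    }

    // Every minute, drain the list. Each entry is popped (and thus removed from Redis)
    // as it is handed off; push it back on failure if you need stronger guarantees.
    @Scheduled(fixedDelay = 60_000)
    public void retryPending() {
        String payload;
        while ((payload = redis.opsForList().rightPop(PENDING_KEY)) != null) {
            process(payload);
        }
    }

    private void process(String payload) {
        // re-publish or hand off to the target service here
    }
}
```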
If you can comment further more on the architecture and the implementation, I can go ahead and update my answer accordingly. :)

Microservice State Synchronization

We are working on an application that has a WebSocket connection to every client. For high availability and load balancing purposes, we would like to scale the receiving micro service. As the WebSocket connection is used to propagate the state of a client to every other client it is important to synchronize the current state of a client with all other instances of the receiving micro service. It is also important that the state has to be reset when a client disconnects.
To give you some specs:
We are using docker swarm
It's a NodeJS backend and an Angular 9 frontend
We have looked into multiple ideas, for example:
Redis Cache (The state would not be deleted if the instance fails.)
Queues/Topics (This would mean every instance has to keep track of the current state of all clients.)
WebSockets between instances (This looks promising but is not really scalable.)
What is the best practice to sync the state of a micro service between multiple instances while making sure that there are no inconsistencies? How are you solving this issue? Are we missing something obvious? Any tips and tricks?
We appreciate any suggestions.
This might not be 100% what you want to hear, but generally people advise that all microservices should be stateless.
An overall application, of course, has state, and databases, persistent event streams or key-value caches (e.g. Redis) are excellent ways of persisting it. Ideally this is bounded per service though, otherwise you risk ending up with a distributed monolith.
It is hard to say in your particular case, but perhaps rethink how state is stored conceptually and make that more explicit: determine what is cache (for performance) and what is genuine state that should be persisted externally (e.g. to Redis and a database), so that many service instances can use it instantly, which makes sure they are truly disposable processes.
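A minimal sketch of externalizing the per-client state (shown with Spring Data Redis; the same pattern applies to a NodeJS backend with a Redis client such as ioredis): the state is written on every update and deleted on disconnect, so any instance can serve any client and a restarted instance holds no state of its own. The key layout and channel name are illustrative:

```java
import org.springframework.data.redis.core.StringRedisTemplate;
import org.springframework.stereotype.Service;

@Service
public class ClientStateStore {

    private final StringRedisTemplate redis;

    public ClientStateStore(StringRedisTemplate redis) {
        this.redis = redis;
    }

    // Called whenever a client's state changes; every instance reads the same key.
    public void update(String clientId, String stateJson) {
        redis.opsForValue().set("client-state:" + clientId, stateJson);
        // Broadcast the change so other instances can push it to their connected clients.
        redis.convertAndSend("client-state-changed", clientId);
    }

    public String get(String clientId) {
        return redis.opsForValue().get("client-state:" + clientId);
    }

    // Called from the WebSocket disconnect handler so stale state does not outlive the client.
    public void remove(String clientId) {
        redis.delete("client-state:" + clientId);
    }
}
```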

Default failure/recovery behavior for Gemfire Server/Client Architecture

For the GemFire cache, we are using the client/server architecture in 3 different geographic regions with 3 different locators.
Cache Server
Each geo-region would have 2 separate cache servers, potentially one primary and one secondary
The cache servers are connected peer-to-peer
The data-policy on the cache servers is replicate
No region persistence is enabled
Cache Client
No persistence is enabled
No durable queues/subscriptions are set up
What would be the default behaviors in the following scenarios?
All cache servers in one geo-region crash: what happens to the data in the cache clients when the cache servers restart? Does the behavior differ for cache clients with proxy or caching-proxy client cache regions?
All cache clients in one geo-region crash. Although we don't have durable queues/subscriptions set up, for this scenario let's assume we do. What happens to the data in the cache clients when they restart? Does the behavior differ for cache clients with proxy or caching-proxy client cache regions?
All cache servers and cache clients in one geo-region crash: what happens to the data in the cache servers and cache clients when they start up? Does the behavior differ for cache clients with proxy or caching-proxy client cache regions?
Thanks in advance!
Ok, so based on how I am interpreting your configuration/setup and your questions, this is how I would answer them currently.
Also note, I am assuming you have NOT configured WAN between your separate clusters residing in different "geographic regions". However, for some of the questions it would not matter whether WAN was configured or not.
Regarding your first bullet...
what happens to the data in the cache clients when the cache servers restart?
Nothing.
If the cache client were also storing data "locally" (e.g. CACHING_PROXY), then the data will remain intact.
A cache client can also have local-only Regions, available only to the cache client, i.e. there is no matching (by "name") Region in the server cluster. This is determined by one of the "local" ClientRegionShortcuts (e.g. ClientRegionShortcut.LOCAL, which corresponds to DataPolicy.NORMAL). Definitely, nothing happens to the data in these types of client Regions if the servers in the cluster go down.
If your client Regions are PROXIES, then your client is NOT storing any data locally, at least for those Regions that are configured as PROXIES (i.e. ClientRegionShortcut.PROXY, which corresponds to DataPolicy.EMPTY).
So...
Does the behavior differ for cache clients with proxy or caching-proxy client cache regions?
See above, but essentially, your "PROXY" based client Regions will no longer be able to "communicate" with the server.
For PROXY, all Region operations (gets, puts, etc) will fail, with an Exception of some kind.
For CACHING_PROXY, a Region.get should succeed if the data is available locally. However, if the data is not available, the client Region will send the request to the server Region, which of course will fail. If you are performing a Region.put, then that will fail since the data cannot be sent to the server.
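For reference, a minimal sketch of how those client Region types are declared with the client API (shown with the Apache Geode packages; older GemFire releases use the com.gemstone.gemfire packages instead, and the locator host/port and Region names are illustrative):

```java
import org.apache.geode.cache.Region;
import org.apache.geode.cache.client.ClientCache;
import org.apache.geode.cache.client.ClientCacheFactory;
import org.apache.geode.cache.client.ClientRegionShortcut;

public class ClientRegions {

    public static void main(String[] args) {
        ClientCache cache = new ClientCacheFactory()
            .addPoolLocator("locator-host", 10334) // illustrative locator
            .create();

        // PROXY: no local storage, every operation goes to the server (DataPolicy.EMPTY).
        Region<String, Object> customers = cache
            .<String, Object>createClientRegionFactory(ClientRegionShortcut.PROXY)
            .create("Customers");

        // CACHING_PROXY: operations go to the server, but results are also kept locally.
        Region<String, Object> orders = cache
            .<String, Object>createClientRegionFactory(ClientRegionShortcut.CACHING_PROXY)
            .create("Orders");

        // LOCAL: client-only Region with no matching server Region (DataPolicy.NORMAL).
        Region<String, Object> scratch = cache
            .<String, Object>createClientRegionFactory(ClientRegionShortcut.LOCAL)
            .create("LocalScratch");

        cache.close();
    }
}
```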
Regarding your second bullet...
What happens to the data in the cache clients when they restart?
Depends on your "Interests Registration (Result) Policy" (i.e. InterestResultPolicy) when the client registers interests for the events (keys/values) in the server Region, particularly when the client comes back online. The interests "expression" (either particular keys, or "ALL_KEYS" or a regex) determines what the client Region will receive on initialization. It is possible not to receive anything.
Durability (the durable flag in `Region.registerInterest(..)`) of client "subscription queues" only determines whether the server will store events for the client while the client is not connected, so that the client can receive what it missed when it was offline.
Note, an alternative to "register interests" is CQs.
See the GemFire documentation on registering interest and on continuous queries (CQs) for more details.
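As a rough sketch of a durable client registering interest, assuming the Geode/GemFire client API (the durable-client-id, locator, Region name, and interest expression are illustrative):

```java
import org.apache.geode.cache.InterestResultPolicy;
import org.apache.geode.cache.Region;
import org.apache.geode.cache.client.ClientCache;
import org.apache.geode.cache.client.ClientCacheFactory;
import org.apache.geode.cache.client.ClientRegionShortcut;

public class DurableInterestClient {

    public static void main(String[] args) {
        ClientCache cache = new ClientCacheFactory()
            .set("durable-client-id", "auth-client-1")  // identifies this client's durable subscription queue on the servers
            .addPoolLocator("locator-host", 10334)
            .setPoolSubscriptionEnabled(true)           // interest registration requires a subscription-enabled pool
            .create();

        Region<String, Object> customers = cache
            .<String, Object>createClientRegionFactory(ClientRegionShortcut.CACHING_PROXY)
            .create("Customers");

        // Register durable interest in all keys; KEYS_VALUES pulls the current entries
        // into the client Region whenever the registration (re)runs.
        customers.registerInterest("ALL_KEYS", InterestResultPolicy.KEYS_VALUES, true);

        // Tell the servers this durable client is ready to receive the events queued while it was offline.
        cache.readyForEvents();
    }
}
```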
As for...
Does the behavior differ for cache clients with proxy or caching-proxy client cache regions?
Not that I know of. It all depends on your interests registration and/or CQs.
Finally, regarding your last bullet...
All cache servers and cache clients in one geo-region crashes, what happens to the data in the cache servers and cache clients when they start up?
There will be no data if you do not enable persistence. GemFire is an "In-Memory" Data Grid, and as such, it keeps your data in memory only, unless you arrange for storing your data externally, either by persistence or writing a CacheWriter to store the data in an external data store (e.g. RDBMS).
Does the behavior differ for cache clients with proxy or caching-proxy client cache regions?
Not in this case.
Hope this helps!
-John

Data replication in Micro Services: restoring database backup

I am currently working with a legacy system that consists of several services which (among others) communicate through some kind of Enterprise Service Bus (ESB) to synchronize data.
I would like to gradually move this system toward a microservices architecture. I am planning to reduce the dependency on the ESB and rely more on a message broker like RabbitMQ or Kafka. Due to some resource/existing-technology limitations, I don't think I will be able to completely avoid data replication between services, even though I should be able to clearly define a single service as the data owner.
What I am wondering now is: how can I safely restore a database backup for a single service when necessary? Doing so will cause the service to be out of sync with other services that hold the replicated data. Any experience/suggestions regarding this?
Have your primary database publish an event every time a mutation occurs, and let the replicating services subscribe to these events and apply the same mutation to their replicated data.
You already use a message broker, so you can leverage your existing stack for broadcasting the events. By having replication done through events, a restore being applied to the primary database will be propagated to all other services.
Depending on the scale of the backup, there will be a short period where the data on the other services will be stale. This might or might not be acceptable for your use case. Think of the staleness as some sort of eventual consistency model.
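A minimal sketch of that event-carried replication, assuming Spring AMQP with a `RabbitTemplate` in the owning service and a listener in each replicating service; the exchange, routing key, queue name, and event payload are illustrative:

```java
import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.stereotype.Component;
import org.springframework.stereotype.Service;

// Owning service: emit an event for every mutation, including those applied by a backup restore.
@Service
class CustomerEventPublisher {

    private final RabbitTemplate rabbit;

    CustomerEventPublisher(RabbitTemplate rabbit) {
        this.rabbit = rabbit;
    }

    void publishUpdated(String customerJson) {
        rabbit.convertAndSend("customer-events", "customer.updated", customerJson);
    }
}

// Replicating service: apply the same mutation to its local copy of the data.
@Component
class CustomerEventListener {

    @RabbitListener(queues = "billing.customer-events")
    public void onCustomerUpdated(String customerJson) {
        // upsert the local replica from the event payload
    }
}
```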

Queue an async web request in Spring with credentials

I'm relatively new to Spring, and trying to queue up a set of web requests on the server (in order to warm memcached). It's unclear to me how I can transfer the current request's credentials so they can be used in the future web request I'm putting in the queue. I've seen a handful of scheduling solutions (TaskExecutor, ApplicationEventMulticaster, etc.), but it was unclear if/how they handle credentials, as that seems to be the most complicated portion of this task.
It's not possible directly. Security credentials are stored in a ThreadLocal (via the SecurityContextHolder), which means that once the request is handed off to another thread, the credentials are lost. All you can do (which might actually be beneficial to your design) is pass the credentials along explicitly, by wrapping them inside the Callable/Runnable or whatever mechanism you use.
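A minimal sketch of passing the credentials along, assuming Spring Security is on the classpath; `DelegatingSecurityContextRunnable` copies the caller's SecurityContext into the worker thread for the duration of the task (the executor and the enqueue method are illustrative):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.springframework.security.concurrent.DelegatingSecurityContextRunnable;
import org.springframework.security.core.context.SecurityContext;
import org.springframework.security.core.context.SecurityContextHolder;

public class WarmupQueue {

    private final ExecutorService executor = Executors.newSingleThreadExecutor();

    // Call this from the request thread, while the SecurityContext is still populated.
    public void enqueue(Runnable warmupRequest) {
        SecurityContext context = SecurityContextHolder.getContext();
        executor.submit(new DelegatingSecurityContextRunnable(warmupRequest, context));
    }
}
```

Spring Security also ships DelegatingSecurityContextExecutor/ExecutorService wrappers if you prefer to wrap the executor once instead of wrapping each task.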
