I am mapping users to connections as described in the following link (https://learn.microsoft.com/en-us/aspnet/signalr/overview/guide-to-the-api/mapping-users-to-connections) so I can find which users to send messages to.
I was wondering if any additional work is required for this to work smoothly on multi-node servers / behind load balancing. I'm not experienced on the infrastructure side, but I'm assuming that if multiple servers are spun up, there would be multiple static hashmaps storing the mappings of users to connections - i.e., one for each server.
Would this mean users that have made a connection from their browser to node A will not be able to communicate with users who've connected to node B?
If this is the case, how would we go about making this possible?
In that same link, just below the Introduction section, it discusses 4 different mapping methods:
The User ID Provider (SignalR 2)
In-memory storage, such as a dictionary
SignalR group for each user
Permanent, external storage, such as a database table or Azure table storage
And after that there is a table that shows which of these works in different scenarios, one of those scenarios being "More than one server".
Since you haven't mentioned which one you use, it depends on which mapping method you are following.
From there, you can check out "scaling out" on the same site you noted, which has several methods you can follow depending on what suits your needs. This is where sending messages to clients, regardless of which server they are connected to, is handled.
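To make the limitation concrete, here is a minimal sketch of the kind of per-process, in-memory map the article describes - in Java rather than C#, with made-up names, since the idea is language-agnostic. One instance of this lives in each server process, which is exactly why node A cannot see node B's connections without a backplane or shared store:

    import java.util.Map;
    import java.util.Set;
    import java.util.concurrent.ConcurrentHashMap;

    // One instance of this lives in EACH server process. Node A's map never
    // sees connections opened against node B, so sending to a user by name
    // only reaches the connections local to the node doing the send.
    public class UserConnectionMap {
        private final Map<String, Set<String>> connections = new ConcurrentHashMap<>();

        public void add(String userName, String connectionId) {
            connections.computeIfAbsent(userName, k -> ConcurrentHashMap.newKeySet())
                       .add(connectionId);
        }

        public void remove(String userName, String connectionId) {
            connections.computeIfPresent(userName, (k, ids) -> {
                ids.remove(connectionId);
                return ids.isEmpty() ? null : ids;
            });
        }

        // Only THIS node's connections for the user; a backplane or external
        // store is what lets a message reach connections on other nodes.
        public Set<String> getConnections(String userName) {
            return connections.getOrDefault(userName, Set.of());
        }
    }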
I have one database with one domain, but there are 3 websites available for my database. I want my 2nd website to publish to that database. Is that possible?
You might want to make sure that you're not violating the terms of service of the company that is hosting your database. Having many outside domains hitting an inside database may cause undue stress on that server that the company is not counting on, or eat up more bandwidth than is allotted for that machine.
In the same breath, though, if you set up some type of data-layer web service which you can connect to, then your many other domains are not hitting the database directly, yet do essentially the same thing in a more ordered fashion of predictable database calls. This may not be what you're looking for, but if set up correctly it could make developing against your database much easier.
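As a rough sketch of that idea, assuming a plain-Java stack and a made-up /api/products endpoint (using the JDK's built-in HTTP server for brevity): the service below would be the single place that talks to the database, and the websites would call it over HTTP instead of opening their own DB connections.

    import com.sun.net.httpserver.HttpServer;
    import java.io.OutputStream;
    import java.net.InetSocketAddress;
    import java.nio.charset.StandardCharsets;

    // A minimal data-layer service: the websites call this endpoint
    // instead of hitting the shared database directly.
    public class DataService {
        public static void main(String[] args) throws Exception {
            HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
            server.createContext("/api/products", exchange -> {
                // In a real service this would run a predictable,
                // parameterized query against the single shared database.
                byte[] body = "[{\"id\":1,\"name\":\"example\"}]"
                        .getBytes(StandardCharsets.UTF_8);
                exchange.getResponseHeaders().add("Content-Type", "application/json");
                exchange.sendResponseHeaders(200, body.length);
                try (OutputStream os = exchange.getResponseBody()) {
                    os.write(body);
                }
            });
            server.start();
        }
    }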
Why?
For educational purposes. I think it would be really nice for my audience to actually "see" it work like that.
Setup
A dockerized Spring Boot REST API (serving up customer information)
A dockerized Cassandra cluster consisting of three connected nodes, holding customer data with a replication factor of two.
Suggestions
Showing which IP address or container name served my request
Showing which IP address or container name held the data that was used to serve my request.
If I were to run these nodes on three separate physical machines, maybe showing which machine held my data?
Something else you have in mind that really shows the distributed capabilities of Cassandra
Can this be achieved via docker logs, or with something in Spring Data Cassandra that I am not aware of?
I don't know about Spring Data, but with the plain Java driver you can get execution information from the ResultSet via getExecutionInfo, and call getQueriedHost on it. If you're using the default DCAware/TokenAware load balancing policy, then you reach at least one of the nodes that holds your data. The rest of the information you can get via the Metadata class, from which you can get the list of token ranges owned by each host, generate a token for your partition key, and look it up in the token ranges.
P.S. See Java driver documentation for more details.
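Here is a minimal sketch of the above with the DataStax Java driver 3.x; the contact point, keyspace (demo), table (customers), and key ('c42') are made up for the example:

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.Host;
    import com.datastax.driver.core.Metadata;
    import com.datastax.driver.core.ResultSet;
    import com.datastax.driver.core.Session;
    import com.datastax.driver.core.TypeCodec;
    import java.nio.ByteBuffer;
    import java.util.Set;

    public class WhoServedMe {
        public static void main(String[] args) {
            try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
                 Session session = cluster.connect()) {

                ResultSet rs = session.execute(
                        "SELECT * FROM demo.customers WHERE id = 'c42'");

                // The node that coordinated this query; with the default
                // TokenAware policy this is normally also a replica.
                Host coordinator = rs.getExecutionInfo().getQueriedHost();
                System.out.println("Request served by: " + coordinator.getAddress());

                // All nodes holding replicas of this partition key.
                Metadata meta = cluster.getMetadata();
                ByteBuffer pk = TypeCodec.varchar().serialize("c42",
                        cluster.getConfiguration().getProtocolOptions().getProtocolVersion());
                Set<Host> replicas = meta.getReplicas("demo", pk);
                replicas.forEach(h -> System.out.println("Data held by: " + h.getAddress()));
            }
        }
    }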
I am now trying to design the database for my microservice-oriented application in a distributed way. My application is about the management of universities. I have different universities, say A, B, C. Each university has separate users working with its business data. Now I am planning to design a separate database per university for storing its user data. So each university has its own database for its users, plus one additional database for its application tables. If I have 2 universities, then I have 2 user-details DBs and another 2 DBs for application tables.
My confusion is that when I search for database designs, I only see the approach of keeping one common database for storing all users (here, one DB for all users of all universities), so every user is mixed within one database.
If I use a separate database for each university, is it possible to support the distributed-database architecture pattern and microservice-oriented standards? Or do I need to keep one DB for all users?
How can I find out which method is appropriate for a microservice / distributed database design pattern?
Actually, there could be multiple solutions, and no one solution is best; the best solution is the one that is appropriate for your product's requirements.
I think it would be a better idea to go with separate databases for each of your clients (universities) to keep the data isolated even if something goes wrong. Also, with time a shared database could grow so huge that it becomes a problem to configure/manage separate backups, cleanups, etc. for individual clients.
Now, with separate databases there comes the challenge of managing distributed transactions across databases, as you don't know which part is going to fail among many. To manage that, you may have to implement a message/event-driven mechanism across all your microservices and ensure consistency that way.
Regarding the message/event mechanism, here is a simple use-case scenario. Suppose there are two services, "A" (user-registration) and "B" (email-service):
"A" registers a user temporarily and publishes an event of sending confirmation email.
The message goes to message broker
The message is received by "B".
The confirmation email is sent to the user.
The user confirms the email to "B"
The "B" publishes event of user confirmation to the broker
"A" receives the event of confirmation and the process is completed.
The above is the best-case scenario; problems can still happen in between, even with the broker itself.
You have to dig deeper into this if you think you need it.
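To make the flow concrete, here is a toy, in-process sketch of those steps in plain Java. A real system would use a broker such as Kafka or RabbitMQ (e.g. via Spring Cloud Stream); the queues here merely stand in for broker topics, and all names are made up:

    import java.util.Map;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.concurrent.TimeUnit;

    public class RegistrationFlow {
        // Stand-ins for broker topics; a real broker sits between the services.
        static final BlockingQueue<String> emailRequested = new LinkedBlockingQueue<>();
        static final BlockingQueue<String> emailConfirmed = new LinkedBlockingQueue<>();
        // Service A's user store: user -> confirmed?
        static final Map<String, Boolean> users = new ConcurrentHashMap<>();

        public static void main(String[] args) throws Exception {
            ExecutorService pool = Executors.newFixedThreadPool(2);

            // Service B (email-service): consumes the event, "sends" the mail,
            // and publishes the confirmation once the user clicks the link.
            pool.submit(() -> {
                String user = emailRequested.take();      // step 3: B receives the message
                System.out.println("B: confirmation email sent to " + user); // step 4
                emailConfirmed.put(user);                 // steps 5-6: user confirms, B publishes
                return null;
            });

            // Service A (user-registration): registers temporarily, publishes,
            // and completes the registration when the confirmation arrives.
            pool.submit(() -> {
                users.put("alice", false);                // step 1: temporary registration
                emailRequested.put("alice");              // step 2: event goes to the broker
                String user = emailConfirmed.take();      // step 7: A receives confirmation
                users.put(user, true);
                System.out.println("A: registration of " + user + " completed");
                return null;
            });

            pool.shutdown();
            pool.awaitTermination(5, TimeUnit.SECONDS);
        }
    }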
Some links that may help.
http://how-to-implement-a-microservice-event-driven-architecture-with-spring-cloud-stre
A Guide to Transactions Across Microservices
I don't think this is a valid design: using a database per client is a multi-tenant architecture practice, while a database per microservice is a microservice architecture practice. You are mixing the two up.
If you are going to use a microservice architecture, you had better design it around bounded contexts, with each context having its own database, to achieve the main microservices rule: autonomy.
We are building a multi tenant application which has restrictions on the regions/countries where the data is persisted.
The application is based on the Microsoft .NET microservice architecture, but we have shared domains, although we have separate DBs at a very low level, say a separate DB for each city. We cannot persist the data of one country in another country's data center. Hazelcast will be used as the distributed cache. I could not find any direct way to configure data isolation, for example like the "Memory Regions" in Apache Ignite. Do we have "Memory Regions" in Hazelcast?
I need write-behind from the cache to the respective database. Can I segregate a part/partition of the cache specific to a database instance?
Any help would be greatly appreciated. Thanks in advance.
I am not directly replying to your question. IMHO, from my understanding, when you have data stored across different clusters/nodes, there will still be a network call, despite you having some key format that keeps the data within the same cluster/node.
Based on my experience, you could easily set up a MemoryCache, which comes as part of System.Runtime.Caching, to store the data on every node, and then use Redis Pub/Sub or Azure Service Bus as the backbone for the pub-sub.
In that case,
any data that is updated in a cache is notified to all the other instances of the application via a Service Bus / Redis message, which is typically just the key.
Upon receipt of the key, each application clears the entry from its internal cache, and the data gets cached back on the next DB access.
This method is quite prevalent in multi-tenant applications, and is also fail-safe and lightweight. The payloads / network transfers are small, and each AppDomain has its internal memory used as a cache, which does support different regions via different instances of MemoryCache.
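Since the idea is language-agnostic, here is the same pattern sketched in Java with Redis Pub/Sub via the Jedis client instead of System.Runtime.Caching; the channel name, host, and class names are all made up:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import redis.clients.jedis.Jedis;
    import redis.clients.jedis.JedisPubSub;

    // Every application node keeps its own local cache and listens on a
    // shared Redis channel; whoever updates a value publishes just the key,
    // and each node evicts that entry and reloads it on the next DB access.
    public class InvalidatingLocalCache {
        private final Map<String, Object> local = new ConcurrentHashMap<>();

        public InvalidatingLocalCache() {
            Thread listener = new Thread(() -> {
                try (Jedis subscriber = new Jedis("localhost", 6379)) {
                    subscriber.subscribe(new JedisPubSub() {
                        @Override
                        public void onMessage(String channel, String key) {
                            local.remove(key); // evict; re-cached on next DB read
                        }
                    }, "cache-invalidation");
                }
            });
            listener.setDaemon(true);
            listener.start();
        }

        public Object get(String key) {
            // A null result means the caller loads from the DB and calls put().
            return local.get(key);
        }

        public void put(String key, Object value) {
            local.put(key, value);
        }

        public void update(String key) {
            // ...write the new value to the owning database here...
            try (Jedis publisher = new Jedis("localhost", 6379)) {
                publisher.publish("cache-invalidation", key); // notify all nodes
            }
        }
    }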
Hope this helps if no direct response is available regarding Hazelcast.
Also, you may refer to this link for some details regarding Hazelcast.
I am trying to develop an application with Tomcat running on several computers on the same LAN, representing several nodes, each of which runs an application with a single shared session (e.g. a shared document editor such as Google Docs). In my understanding so far, I need a single shared session, and several users need to update the doc simultaneously, with each other's updates reflected on each other's web interfaces almost immediately. Can I achieve this with Tomcat's clustering feature (http://tomcat.apache.org/tomcat-7.0-doc/cluster-howto.html#Configuration_Example), or is this just a failure recovery system?
Tomcat's clustering feature is meant for failover - if one node fails, the user can carry on working while being transparently sent to another node, without needing to log in again.
What you are trying to achieve is a totally different scenario, and I think using the session for this is just wrong. Going back to the Google Docs example: how would you grant (or revoke) document access to another user? What do you do when the session times out - create the document again? Also, how would you define which users are able to access selected documents?
You would need to persist this data somewhere (a DB?) anyway, so implement or reuse an existing ACL system where you can share information about users and document permissions.
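For illustration, a minimal sketch of such an ACL in Java, with made-up names; it is in-memory here for brevity, but as said above the real thing should live in the DB (e.g. a table of document_id, user_id, permission):

    import java.util.Map;
    import java.util.Set;
    import java.util.concurrent.ConcurrentHashMap;

    public class DocumentAcl {
        enum Permission { READ, WRITE }

        // documentId -> (userId -> granted permissions); in production this
        // would be a table like acl(document_id, user_id, permission).
        private final Map<String, Map<String, Set<Permission>>> acl = new ConcurrentHashMap<>();

        public void grant(String documentId, String userId, Permission p) {
            acl.computeIfAbsent(documentId, d -> new ConcurrentHashMap<>())
               .computeIfAbsent(userId, u -> ConcurrentHashMap.newKeySet())
               .add(p);
        }

        public void revoke(String documentId, String userId, Permission p) {
            Map<String, Set<Permission>> users = acl.get(documentId);
            if (users != null && users.containsKey(userId)) {
                users.get(userId).remove(p);
            }
        }

        // Checked on every request, independently of any servlet session.
        public boolean canAccess(String documentId, String userId, Permission p) {
            return acl.getOrDefault(documentId, Map.of())
                      .getOrDefault(userId, Set.of())
                      .contains(p);
        }
    }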