Standalone CQ5 publish instances without cluster

I am looking for the best solution for running multiple publish instances. I have tried both shared-nothing and shared-datastore configurations.
Are there any advantages or disadvantages to having two or more publish instances without a cluster setup? In such a configuration, how do I start a new publish instance? That is, how should I replicate the data from the author when starting a new publish instance (probably from a backup), and what are the best practices for this? And how do I handle reverse replication issues: while I am starting a new instance, the other publish instances might receive new user-generated data which must be replicated to the new publish instance too. What is your experience with this topic?
Thanks in advance!

Multiple publish instances allow you to use a load balancer and increase the number of concurrent users your site can serve.
Replication of content can be done in the standard way, using CRX package replication or the Tree Activation tool, once you've set up a replication agent per instance.
You shouldn't run into much difficulty with content generated while the instance is being set up: once the instance is up and running, you can just activate that content in the normal way. (If you think it will be an issue, you could set up the replication agent first, so that new content is queued while the instance is being set up, though I can't see why you would need to do this.)
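If you need to script this (for example, to re-activate content created while the new instance was being prepared), the replication servlet on the author can be driven over HTTP. A minimal sketch, assuming a standard CQ5/AEM author on localhost:4502 and the stock /bin/replicate.json endpoint; the content path and credentials are placeholders:

```typescript
// Trigger activation of a content path on a CQ5/AEM author instance.
// Assumes the standard replication servlet at /bin/replicate.json;
// the host, credentials, and content path are placeholders.
async function activate(contentPath: string): Promise<void> {
  const body = new URLSearchParams({ cmd: 'Activate', path: contentPath });
  const res = await fetch('http://localhost:4502/bin/replicate.json', {
    method: 'POST',
    headers: {
      Authorization: 'Basic ' + Buffer.from('admin:admin').toString('base64'),
      'Content-Type': 'application/x-www-form-urlencoded',
    },
    body: body.toString(),
  });
  if (!res.ok) {
    throw new Error(`Activation of ${contentPath} failed: HTTP ${res.status}`);
  }
}

// Each replication agent configured on the author (one per publish
// instance) picks up the activation event from its own queue.
activate('/content/mysite').catch(console.error);
```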

Related

Apache NiFi - About .nar File Changes and NiFi Restarting

I want suggestions for my application:
I have multitenancy in NiFi. For each process group, I have different tenants/users.
For any change in one tenant/user, such as in a custom processor (a .nar file is created), we need to copy that .nar file into the lib folder and restart NiFi. Because of this, the whole NiFi server restarts, and with it every tenant/user and process group.
So, please give some suggestions: how can we restart only one tenant/user or process group, or have the .nar file take effect without restarting NiFi?
NiFi does not currently have the kind of warm restart option that you describe; however, a lot of the base functionality needed to support it is in the code base, and the concept is on the community roadmap.
Some options that might help you today:
Consider segregating the tenants with a high rate of code change into separate development environments. You could leverage the Docker builds to provide flexibility and easy automation. You could then promote the end-of-day versions of your NARs into the 'Production' cluster each night, hopefully without disturbing users.
Consider utilising the NiFi Site-to-Site capability to have linked NiFi environments instead of a single shared one. Processors that change regularly could be called out to, and updated on, their own schedule.
Consider why you are changing processor code so regularly; there may be a better approach than hard-coding logic and parameters into the processors - the variable registry, the various controller services, the flow registry, etc. all provide a very rich feature set.

Wakanda Shared Storage equivalent in v10

Due to compatibility issues, my project must remain in Wakanda 10. What is the best technique to keep a variable consistent across multiple server threads? For instance, if I want to make an object literal that can be modified, how can I best ensure the data is updated across all Wakanda Server threads?
For now, I am going to write the value to the datastore as a workaround. Any better suggestions would be appreciated. Would a shared worker help me?
Web workers can't access global variables; you can handle communication between web workers via message passing.
To make an object available to all workers you can:
Pass the object from one worker to another using postMessage;
Store the object in the database.
I believe the best way is to store your variable in the datastore. It's simpler, especially if you have a lot of workers.
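For the message-passing route, here is a minimal sketch, assuming Wakanda's W3C-style SharedWorker API; the file name, worker name, and message shape are all invented:

```typescript
// --- workers/state.js: the shared worker that owns the mutable object ---
// In the W3C SharedWorker model (which Wakanda follows), a single instance
// serves every server thread that connects to it.
const sharedState: Record<string, any> = {};

declare let onconnect: (event: any) => void; // provided by the worker runtime

onconnect = (event) => {
  const port = event.ports[0];
  port.onmessage = (msg: any) => {
    if (msg.data.type === 'set') {
      sharedState[msg.data.key] = msg.data.value; // single writer, no races
    } else if (msg.data.type === 'get') {
      port.postMessage(sharedState[msg.data.key]); // reply to the caller
    }
  };
};

// --- in any server thread (a separate script) ---
// Connect by script path and worker name; every thread naming
// 'sharedState' talks to the same worker instance.
const client = new SharedWorker('workers/state.js', 'sharedState');
client.port.postMessage({ type: 'set', key: 'config', value: { mode: 'live' } });
```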
Here are some related discussions:
Shared worker do share variables
Sharing variables between web workers? [global variables?]

How to update a local memory cache in all server instances

I have a web server cluster that contains many running web server instances. Each instance caches some configuration in its local memory; the original configuration is stored in a database.
This configuration is used for every request, so the cache is necessary for performance reasons.
I want to provide an admin page on which an administrator can change the configuration. How do I then update the cache in every server instance?
For now I have two solutions for this:
Set an expiry time on the cache.
When the administrator updates the configuration, notify each instance via some pub/sub mechanism (e.g. using Redis).
For solution 1, the drawback is that changes cannot take effect immediately.
For solution 2, I wonder whether the pub/sub will have an impact on the performance of the web server.
Which one is better? Or is there a common solution to this problem?
Another drawback of option 1 is that you'll periodically hit your database unnecessarily.
If you're already using Redis then option 2 is a good solution. I've used it successfully and can't imagine how there could be a performance impact just because you're using pub/sub.
Another option is to create a cache invalidation URL on each website, e.g. /admin/cache-reset/, and have your administration tool call the cache-reset URL on each individual server. The drawback of this solution is that you need to maintain a list of servers. If you're not already using Redis it could just be the simple/practical/low-tech solution that you're looking for.
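For what it's worth, a rough sketch of option 2 using node-redis; the channel name and the reload function are placeholders:

```typescript
import { createClient } from 'redis';

// Local in-memory copy of the configuration, one per web server instance.
let configCache: Record<string, string> = {};

// Placeholder: fetch the authoritative configuration from the database.
async function loadConfigFromDb(): Promise<Record<string, string>> {
  return {};
}

// Each web server instance subscribes once at startup. Redis pub/sub
// requires a dedicated connection for subscribing.
async function startInvalidationListener(): Promise<void> {
  const subscriber = createClient({ url: 'redis://localhost:6379' });
  await subscriber.connect();
  await subscriber.subscribe('config-changed', async () => {
    configCache = await loadConfigFromDb(); // reload on notification
  });
}

// The admin page calls this after writing the new values to the database.
async function notifyConfigChanged(): Promise<void> {
  const publisher = createClient({ url: 'redis://localhost:6379' });
  await publisher.connect();
  await publisher.publish('config-changed', Date.now().toString());
  await publisher.quit();
}
```

The subscriber sits idle on its connection between notifications, so the steady-state cost to the web server is negligible.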

What are my alternatives for managing RabbitMQ channel changes as part of a CD process

I am looking for alternatives for managing my RabbitMQ setup, the same way I manage my RDBMS with Liquibase/Flyway or Mongo with Mongeez.
After looking around a bit I haven't found many resources on it (which gets me thinking about how companies actually do it).
I read a thread which argued that each component should create the channels it needs: either they are already there, or they will be created at runtime when needed.
Other than that I haven't found any mention of a requirement like mine. Am I looking at this the wrong way?
We manage it the following way. It's not a clean, straightforward solution, but it works.
Installation, update, and base configuration of RabbitMQ are done via an Ansible role.
Creation, update, and deletion of virtual hosts, users, and access permissions are done via a second Ansible role.
Management, i.e. creation, update, and deletion of queues and exchanges, is done from within the application (see the sketch below).
With this setup we were able to provide a multi-tenant configuration and efficiently manage several installations across several stages.
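To illustrate the third point, a minimal sketch using amqplib from Node.js; the exchange, queue, and binding names are invented. Because declarations are idempotent, every service can safely assert the topology it needs at startup:

```typescript
import amqp from 'amqplib';

// Each service declares (asserts) the topology it depends on at startup.
// assertExchange/assertQueue are idempotent: they create the resource if
// it is missing and succeed silently if it already exists unchanged.
async function declareTopology(): Promise<void> {
  const connection = await amqp.connect('amqp://localhost');
  const channel = await connection.createChannel();

  await channel.assertExchange('orders', 'topic', { durable: true });
  await channel.assertQueue('order-processing', { durable: true });
  await channel.bindQueue('order-processing', 'orders', 'order.created');

  await channel.close();
  await connection.close();
}

declareTopology().catch(console.error);
```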

Tomcat Session Replication

I am trying to develop an application with Tomcat running on several computers on the same LAN, representing several nodes, each of which runs an application with a single shared session (e.g. a shared document editor such as Google Docs). In my understanding so far, I need a single shared session, several users need to update the document simultaneously, and each other's updates should be reflected in each other's web interfaces almost immediately. Can I achieve this with Tomcat's clustering feature (http://tomcat.apache.org/tomcat-7.0-doc/cluster-howto.html#Configuration_Example), or is that just a failure recovery system?
Tomcat's clustering feature is meant for failover: if one node fails, the user can carry on working while being transparently sent to another node, without needing to log in again.
What you are trying to achieve is a totally different scenario, and I think using the session for this is just wrong. If you go back to the Google Docs example, how would you grant (or revoke) document access to another user? What do you do when the session times out - create the document again? Also, how would you define which users can access which documents?
You would need to persist this data somewhere (a DB?) anyway, so implement or reuse an existing ACL system where you can store information about users and document permissions.
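As a loose illustration of that last point (framework-independent; the types and helpers are invented), a document ACL resolved from persistent data instead of session state might look like this:

```typescript
// A minimal document ACL, persisted outside the session, e.g. in a table
// document_permissions(user_id, document_id, role). The shape is invented.
type Role = 'owner' | 'editor' | 'viewer';

interface Permission {
  userId: string;
  documentId: string;
  role: Role;
}

// In-memory stand-in for the database table.
const permissions: Permission[] = [];

function grant(userId: string, documentId: string, role: Role): void {
  permissions.push({ userId, documentId, role });
}

function canEdit(userId: string, documentId: string): boolean {
  return permissions.some(
    (p) =>
      p.userId === userId &&
      p.documentId === documentId &&
      (p.role === 'owner' || p.role === 'editor'),
  );
}

// Because access is resolved from persistent data rather than session
// state, any Tomcat node can answer the check - no shared session needed.
grant('alice', 'doc-42', 'editor');
console.log(canEdit('alice', 'doc-42')); // true
```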
