Single database and multiple app nodes in Jelastic/Virtuozzo

I need to understand a few things, so I am asking here. I have configured a LEMP node with a WordPress installation.
I then decided to add a horizontal node at the app level and a Redis node for the object cache (plus, of course, the load balancer this requires), and I set up an add-on to synchronize the /ROOT/wp-content folder between the app nodes (I followed the configuration suggested by the guides). Note that I have NOT deployed a preconfigured cluster.
So at the moment I would have 1 load balancer + 2 app nodes + 1 Redis node.
I found that in this setup there is only one database shared by both apps (and that suits me), so I wonder: when the database starts working at full capacity, can I run into problems when the second app node is activated or deactivated? What precautions should I take in this case so that everything runs smoothly?
Does adding a second (third, fourth, etc.) node at the horizontal level (app node) serve to increase the cloudlets available in case of need, or am I misunderstanding something?
Thanks to anyone who answers.

Related

How to properly scale Jelastic app servers horizontally

I have several stateless app servers packed into Docker containers. I have a lot of load on top of them and I want to horizontally scale this setup. My setup doesn't include load balancer nodes.
What I've done is simply increased nodes count — so far so good.
From my understanding, Jelastic has some internal load balancer that decides which node an incoming request should be passed to, e.g.:
user -> jelastic.my-provider.com -> one of the 10 app nodes created.
But I've noticed that a lot of my nodes (especially the last ones) are not receiving any requests and just sit idle, while the first nodes receive the lion's share of incoming requests (and I have a lot of them!). This looks strange to me, because I thought the internal load balancer did some kind of round-robin distribution.
How do I set up round-robin balancing properly? I came to the conclusion that I have to create another environment with nginx/haproxy and manually add all 10 of my nodes to its list of backend servers.
Edit: I've set up a separate HAProxy instance, manually added all my nodes to haproxy.cfg, and it worked like a charm (see the sketch below). But the question is still open, since I want to achieve automatic/scheduled horizontal scaling.
Edit 2: I use Jelastic v5.3 Cerebro with custom Docker images (by the way, I have something like ~20 environments, all built from custom images except the databases).
My topology for this specific case is pretty simple: a single Docker environment with the app server configured and scaled to 10 nodes. I don't use a public IP.
Edit 3: I don't need sticky sessions at all. All my requests come from another service deployed to Jelastic (1 node).
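
For reference, a minimal haproxy.cfg sketch of the manual round-robin setup described above; the node addresses, port and timeouts are placeholders to be replaced with the internal addresses of the actual app nodes:

    defaults
        mode http
        timeout connect 5s
        timeout client  30s
        timeout server  30s

    frontend app_in
        bind *:80
        default_backend app_nodes

    backend app_nodes
        balance roundrobin
        # one "server" line per app node; these have to be updated (or
        # regenerated) whenever nodes are added or removed
        server node1 10.101.1.1:8080 check
        server node2 10.101.1.2:8080 check
        server node3 10.101.1.3:8080 check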

File sync between n web servers in cluster

There are n nodes in a web cluster. Files may be uploaded to any node and must then be distributed to every other node. This distribution does not have to happen in a transaction (in fact it must not; distributed transactions don't scale), and some latency is acceptable, although it must be minimal. Conflicts can be resolved arbitrarily (typically last write wins), provided that the resolution is also distributed to all nodes so that eventually all nodes have the same set of files. Nodes can be added and removed dynamically without having to reconfigure existing nodes. There must be no single point of failure and no additional boxes required to solve this (such as RabbitMQ).
I am thinking along the lines of using consul.io for dynamic configuration, so that each node can ask Consul which other nodes are available, and writing a daemon (in Go) that monitors the relevant folders and communicates with the other nodes using ZeroMQ.
It feels like I would be reinventing the wheel, though. This is a common problem, and I expect there are existing solutions I don't know about. Or perhaps my approach is wrong and there is another way to solve this?
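
Roughly what I have in mind, as a minimal sketch rather than the actual daemon: peer discovery via Consul's catalog HTTP API, with plain HTTP standing in for ZeroMQ as the transport. The "web" service name, the /sync endpoint and the command-line trigger are placeholders for illustration only:

    // sketch.go - hypothetical: discover peers via Consul, push a changed file to each.
    // Assumes every web node registers itself in Consul under the service name "web"
    // and exposes a small HTTP endpoint (PUT /sync/<name>) that writes the file to disk.
    package main

    import (
        "bytes"
        "encoding/json"
        "fmt"
        "net/http"
        "os"
        "path/filepath"
    )

    // catalogEntry mirrors the fields we need from Consul's
    // GET /v1/catalog/service/<name> response.
    type catalogEntry struct {
        Address     string
        ServicePort int
    }

    // peers asks the local Consul agent for all nodes providing the "web" service.
    func peers() ([]catalogEntry, error) {
        resp, err := http.Get("http://127.0.0.1:8500/v1/catalog/service/web")
        if err != nil {
            return nil, err
        }
        defer resp.Body.Close()
        var entries []catalogEntry
        return entries, json.NewDecoder(resp.Body).Decode(&entries)
    }

    // replicate pushes one uploaded file to every peer (the receiver applies last-write-wins).
    func replicate(path string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        nodes, err := peers()
        if err != nil {
            return err
        }
        for _, n := range nodes {
            url := fmt.Sprintf("http://%s:%d/sync/%s", n.Address, n.ServicePort, filepath.Base(path))
            req, err := http.NewRequest(http.MethodPut, url, bytes.NewReader(data))
            if err != nil {
                return err
            }
            resp, err := http.DefaultClient.Do(req)
            if err != nil {
                fmt.Fprintln(os.Stderr, "push to", n.Address, "failed:", err)
                continue
            }
            resp.Body.Close()
        }
        return nil
    }

    func main() {
        // The real daemon would be driven by a filesystem watcher (e.g. fsnotify);
        // here we simply replicate whatever path is passed on the command line.
        if len(os.Args) > 1 {
            if err := replicate(os.Args[1]); err != nil {
                fmt.Fprintln(os.Stderr, err)
                os.Exit(1)
            }
        }
    }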
Yes, there has been some stuff going on with distributed synchronization lately:
You could use syncthing (open source) or BitTorrent Sync.
Syncthing is node-based, i.e. you add nodes to a cluster and choose which folders to synchronize.
BTSync is folder-based, i.e. you obtain a "secret" for a folder and can synchronize with everyone in the swarm for that folder.
From my experience, BTSync has better discovery and connectivity, but the whole synchronization process is closed source and nobody really knows what happens. Syncthing is written in Go, but sometimes has trouble discovering peers.
Both syncthing and BTSync use LAN discovery via broadcast and a tracker for discovery, AFAIK.
EDIT: Or, if you're really cool, use IPFS to host the latest version, IPNS to "name" that and mount the IPNS on the servers. You can set the IPFS bootstrap list to some of your servers, which would even make you independent of external trackers. :)

How to Auto-start selenium nodes under Windows

I am trying to automate the startup of my Selenium Grid.
I have the Hub registered as a service, so that starts when the machine starts, but
the literature tells me I can't do the same with the nodes, because they won't run in a user context, and so I would not be able to get screenshots etc.
I've seen vague hints that you can add something to the registry to start a program, but I'm not really convinced that's what I want.
IT pulls down the servers for upgrades at intervals, and sessions are set to time out after X amount of inactivity, so it's a tedious and silly process to open remote desktops to all 6 nodes, log in, and then start the process every time.
How do you best manage this?
- Configure the machines to auto-login, and place startSeleniumNode.bat in that user's Startup folder (see the sketch after this list)?
- Add some kind of command-line step to the Jenkins build script that launches the tests, calling each of the 6 nodes in turn to start the Selenium node (and how would you do that?)
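
For reference, startSeleniumNode.bat would be something along these lines; the jar version, hub host/port and node options are placeholders for whatever the actual grid uses:

    @echo off
    rem Register this machine as a Selenium Grid node with the hub.
    java -jar selenium-server-standalone-3.141.59.jar ^
         -role node ^
         -hub http://hub-host:4444/grid/register ^
         -maxSession 5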
Take a look at AlwaysUp - it allows you to run almost any application as a Windows service, including Selenium Grid hubs and nodes.
I've previously created a fairly large Grid infrastructure using AlwaysUp for node management. It's very useful for starting up the Grid on boot, and it lets you specify a user account to run as, schedule restarts at regular intervals, and a lot more.

100% uptime for web application

The requirement for our next web app is that we be able to deploy a new version of the app without downtime.
How is it possible to achieve such a task?
Does it mean we need to run 2 different servers (Tomcats) and redirect users to each one when needed?
Are there tools that do this specific task? What category do these tools fall into?
Thanks
Just use Tomcat's parallel deployment feature. It is available from Tomcat 7 onwards.
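As a rough illustration of what parallel deployment looks like (the application name and version numbers are placeholders): you drop versioned WAR files into webapps, both versions stay live under the same context path, existing sessions keep hitting the old version and new sessions go to the newest one:

    webapps/
        shop##0001.war    <- old version, keeps serving its existing sessions
        shop##0002.war    <- new version, receives all new sessions

Once no sessions remain on the old version, it can be undeployed.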
Don't forget, 100% availability is impossible - it may happen for a certain period, but no one can guarantee it, no matter what setup you have.
But since you're looking for a smooth change from one version to another, the best you can do is update one node and then switch over to it. Of course, since you likely have sessions that shouldn't be disconnected, you'll need to make sure that something (e.g. a load balancer) directs all new requests to the new node, while requests for existing sessions stay on the old node until no one is using it anymore; after that you can upgrade the second node and, finally, balance load across both nodes again.
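
One hedged way to get that behaviour with HAProxy, for example (server names and addresses are placeholders, and it assumes the runtime admin socket is enabled): a sticky cookie keeps existing sessions on the node that created them, and putting the old node into drain stops new traffic from reaching it:

    backend app
        balance roundrobin
        cookie SRV insert indirect nocache
        server app-v1 10.0.0.11:8080 cookie v1 check
        server app-v2 10.0.0.12:8080 cookie v2 check

    # During an upgrade: stop sending new requests to the old node, while
    # cookie-bound sessions keep going there until they end.
    #   echo "set server app/app-v1 state drain" | socat stdio /var/run/haproxy.sock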

How to prevent WebSphere from starting before files from an application update have been unpacked

Using the WebSphere Integrated Solutions Console, a large (18,400-file) web application is updated by specifying a WAR file name, going through the update screens, and finally saving the configuration. The Solutions Console web UI spins for a while, then returns, at which point the user is able to start the web application.
If the application is started after this "successful update", it fails, because the files that make up the web application have not yet been exploded out to the deployment directory.
Experimentation indicates that it takes on the order of 12 minutes for the files to appear!
One more bit of background that may be significant: There are 19 application servers on this one WebSphere instance. WebSphere insists that there be a lot of chatter between them, even though they don't need anything from each other. I wondered if this might be slowing things down when it comes to deployment. Or if there's some timer in the bowels of WebSphere that is just set wrong (usual disclaimers apply...I'm just showing up and finding this situation...I didn't configure this installation).
Additional Information:
This is a Network Deployment configuration, and it's all on one physical host.
- ND 6.1.0.23
Is this a standalone or an ND setup? I am guessing it is an ND setup, considering you have stated that there are 19 app servers. The nodes should be synchronized with the deployment manager so that the updated files are available to the respective nodes.
After you update and save the changes, try synchronizing the nodes with the dmgr (or, alternatively, as part of the update process, click Review and check the box that says synchronize nodes); this will distribute the changes to the various nodes.
The default synchronization interval, I believe, is 1 minute.
12 minutes certainly sounds like a lot. Is there any possibility of the network being an issue here?
HTH
Manglu
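
If a node still looks out of date, a hedged option for a manual full resync is the syncNode command, run on the node itself with its node agent stopped; the profile path, dmgr host, SOAP port and credentials below are placeholders:

    cd /opt/IBM/WebSphere/AppServer/profiles/AppSrv01/bin
    ./stopNode.sh
    ./syncNode.sh dmgr-host 8879 -username wasadmin -password yourpassword
    ./startNode.sh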
