I just imported multiple flows into a managed production environment and now they are not being triggered (JotForm triggers). All of the connections are correct. Does anyone have any thoughts?
Related
I have a production Laravel website that uses Beanstalk as a queue driver.
Now, I've been asked to make a staging website on the same server, with all the same functionality of the production website.
I am worried about the queues and scheduled tasks. From what I can see, there is a single beanstalkd process on the server. If I start adding things to the queue from the staging site, I am worried that the scheduled tasks from the production site will pick them up and perform the queued actions (some of which could be very tricky, like billing users).
The staging server needs to have the real database from production in order to make sense, including real member data.
How do I set up the staging Laravel application to not collide with production in this regard, but have an identical database?
You can either set up two beanstalkd connections with different default tubes and, based on the environment (your .env), send jobs to the appropriate one.
Or you keep a single connection but specify a different tube per environment - in Laravel that is the queue setting of the beanstalkd connection in config/queue.php. Either way you end up with one set of tubes for production and another for staging, so each application's workers only ever see their own jobs; see the sketch after the links below.
See some guidance here:
https://laracasts.com/discuss/channels/general-discussion/queue-with-two-tubes
and:
https://fideloper.com/ubuntu-beanstalkd-and-laravel4
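To make the tube-isolation idea concrete outside of the Laravel config, here is a minimal sketch using the greenstalk Python client, chosen purely for illustration; the tube names, host and port are placeholders, and in Laravel itself the equivalent is simply two connection entries (or two values for the queue setting) driven by your .env:

```python
# Per-environment tube isolation on a single beanstalkd instance.
# Uses the greenstalk client (pip install greenstalk); tube names are examples.
import greenstalk

ENV = "staging"                 # or "production"; e.g. read from your env file
TUBE = "myapp_" + ENV           # myapp_staging vs myapp_production

# Producer side: push jobs only onto this environment's tube.
producer = greenstalk.Client(("127.0.0.1", 11300), use=TUBE)
producer.put("charge-user:42")  # a staging job never lands on the production tube
producer.close()

# Worker side: watch only this environment's tube, so a production worker
# never reserves (and never executes) staging jobs, and vice versa.
worker = greenstalk.Client(("127.0.0.1", 11300), watch=TUBE)
job = worker.reserve()          # blocks until a job is available on TUBE
print("reserved:", job.body)
worker.delete(job)
worker.close()
```

The same separation carries over to queue:work and the scheduler: as long as each application's worker only watches its own tube, the two sites can safely share one beanstalkd process.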
The issue is that I can't find any documentation on changing a managed (autoscaling) instance group into an unmanaged instance group with 0 servers. I've looked at Python's google.cloud and googleapiclient libraries without any luck. They both show ways of managing each kind of group individually, but not converting one into the other. service.instanceGroupManagers().resize was also a no-go.
The gcloud reference at https://cloud.google.com/sdk/gcloud/reference/compute/instance-groups/
also treats them individually.
I know they support this, but I can't figure out how to do it without the GUI.
Maybe someone has a better way of doing this. The idea is to have a load balancer backend with a maintenance splash page and an RPS of 0 so it gets no traffic. When we want the sites to go down for an update, we drain all the active connections using the built-in draining that happens when a server is being deleted. We do this by setting the instance group to autoscaling off (unmanaged) and 0 servers.
If you're using a managed instance group and all of the images are the same, the options below are available and much simpler.
It does not seem possible to change a managed instance group into an unmanaged one in any way, so I cannot provide steps for doing this through automation.
It is best to use a rolling update or a canary deployment. You can also choose between opportunistic and proactive updates. These methods and how to use them (gcloud commands and API examples included) are documented here; a rough API sketch follows the list below.
Rolling update: Replace x instances at a time. For example, with 3 instances, the first instance goes down and is updated; once it is finished, the second goes down to be updated; and once that is finished, the third is updated last. If there are 50 instances, you can specify that 10 at a time be updated, and so on.
Canary update: Imagine you want to test your new application. Only x of y instances (e.g. 1 of 3) are updated, so some users use the new application while others use the old one. This allows you to test the new version in production without affecting all instances. If the new version is running smoothly, you can roll the update forward (rolling update) or roll it back by removing the few instances running the new version.
Proactive update: Instances are simply recreated with the new version.
Opportunistic update: If proactive updates are too disruptive, an opportunistic update waits for the autoscaler or some other event that would restart or recreate the instance anyway, and only then creates the instance from the new template.
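As a rough illustration of the API route (the project, zone, group and template names below are placeholders, and the policy values are up to you), a rolling update can be started by patching the managed instance group's target template and update policy with the googleapiclient library the question mentions:

```python
# Sketch: start a rolling update on a managed instance group by patching its
# target instance template and update policy. Assumes Application Default
# Credentials; all names below are placeholders.
from googleapiclient import discovery

PROJECT = "my-project"          # placeholder
ZONE = "us-central1-a"          # placeholder
MIG = "my-managed-group"        # placeholder
NEW_TEMPLATE = "global/instanceTemplates/my-app-v2"  # placeholder

compute = discovery.build("compute", "v1")

body = {
    # The template all instances should converge to.
    "versions": [{"instanceTemplate": NEW_TEMPLATE}],
    # PROACTIVE rolls instances immediately; OPPORTUNISTIC waits until an
    # instance would be recreated anyway (autoscaler events, manual recreate).
    "updatePolicy": {
        "type": "PROACTIVE",
        "maxSurge": {"fixed": 1},        # extra instances allowed during the roll
        "maxUnavailable": {"fixed": 1},  # instances allowed to be down at once
    },
}

operation = compute.instanceGroupManagers().patch(
    project=PROJECT, zone=ZONE, instanceGroupManager=MIG, body=body
).execute()
print(operation["name"])
```

A canary is the same call with two entries in versions, where the new version carries a targetSize (e.g. {"fixed": 1}) so that only that many instances run the new template.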
Hope this helps.
If a worker role (or, for that matter, a web role) is continuously serving both long- and short-running requests, how does continuous delivery work in this case? Obviously, pushing a new release to the cloud will abort the currently active sessions on the servers. What should the strategy be for handling this situation?
Cloud Services have production and staging slots, so you can swap them whenever you want. Continuous delivery or integration can be implemented using Visual Studio Team Services, and I would recommend it - we use it ourselves. As you say, you have to decide when to swap the production and staging slots (for example, we did it when user load was very low, which in our case was at night, but it may be different in your case). Swapping slots is a very fast process and, as far as I know, it is a matter of changing settings behind the load balancer rather than a physical deployment.
https://azure.microsoft.com/en-us/documentation/articles/cloud-services-continuous-delivery-use-vso/#step6
UPD - I remember testing this, and my experience was that incoming connections (for example, RDP) stayed stable while outgoing ones did not. So I cannot guarantee that existing connections will be ended gracefully, but in my experience there were no issues.
Using the WebSphere Integrated Solutions Console, a large (18,400-file) web application is updated by specifying a WAR file name, going through the update screens, and finally saving the configuration. The console web UI spins for a while and then returns, at which point the user is able to start the web application.
If the application is started after this "successful update", it fails, because the files that make up the web application have not yet been exploded out to the deployment directory.
Experimentation indicates that it takes on the order of 12 minutes for the files to appear!
One more bit of background that may be significant: there are 19 application servers on this one WebSphere instance. WebSphere insists on a lot of chatter between them, even though they don't need anything from each other. I wondered whether this might be slowing things down when it comes to deployment, or whether some timer in the bowels of WebSphere is simply set wrong (usual disclaimers apply... I'm just showing up and finding this situation; I didn't configure this installation).
Additional Information:
This is a Network Deployment configuration, and it's all on one physical host.
* ND 6.1.0.23
Is this a standalone or an ND setup? I am guessing it is an ND setup, considering you have stated that there are 19 app servers. The nodes need to be synchronized with the deployment manager so that the updated files become available to the respective nodes.
After you update and save the changes, try synchronizing the nodes with the dmgr (or, alternatively, as part of the update process, click on Review and check the box that says synchronize nodes); this will distribute the changes to the various nodes. A wsadmin sketch of a forced synchronization is below.
The default synchronization interval, I believe, is 1 minute.
12 minutes certainly sounds like a lot. Is there any possibility that the network is an issue here?
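For reference, a forced synchronization can also be scripted. The following is a minimal wsadmin (Jython) sketch - the NodeSync MBean query and the sync operation are standard, but treat it as a starting point rather than something tuned for your cell:

```python
# Run with: wsadmin.sh -lang jython -f syncNodes.py
# AdminConfig and AdminControl are provided by wsadmin when connected to the dmgr.

# Save any pending configuration changes first.
AdminConfig.save()

# One NodeSync MBean exists per running node agent; trigger a sync on each.
syncMBeans = AdminControl.queryNames('type=NodeSync,*')
for mbean in syncMBeans.splitlines():
    if mbean:
        print 'Synchronizing node agent MBean: ' + mbean
        AdminControl.invoke(mbean, 'sync')
```

If a manual sync triggered this way is also slow, that would suggest the delay is in distributing the files to the nodes rather than in the console itself.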
HTH
Manglu
I have two instances of Oracle Application Server (OAS) clustered together and replicating sessions. Whenever I terminate one of the instances by killing the process, the other instance picks up the session and keeps it alive; everything works as expected. If I gracefully shut down one instance of OAS (using opmn stopall), HttpSessionDestroyedEvent events are fired and session information gets deleted, which causes the application not to fail over gracefully. This is my first experience with a clustered environment and I am curious whether this is common. I know and expect that HttpSessionDestroyedEvent events are fired in a non-clustered environment when the server instance is stopped, but it just doesn't seem correct here. How would one perform any kind of maintenance on one server? I am using the Spring Framework, which is where the HttpSessionDestroyedEvent event comes from.
It seems that this is a common problem with clustering and web servers. Basically, when a single node belonging to a cluster is gracefully shut down, that node fires session-destroyed events for all of the sessions that belong to it, even if other nodes are still up and running in the cluster. Here are a few more links that describe the same problem I am having.
Tomcat Issues
JBoss Issues
A workaround is to load a properties file (see the JBoss link) that contains a shutdown flag anywhere you listen for a session-destroyed event, and skip the destruction logic when the flag is set. One drawback is that the system admin has to remember to update the properties file before and after a restart.