I recently deployed a SOA composite application via EM (Enterprise Manager), but it took an hour to deploy (it did deploy successfully after that hour), which caused the admin server and other servers to go into a critical state.
I tried the same in a lower environment, but the deployment took less than a minute.
A few points:
I double-checked the configuration plan that was used with the JAR; it is correct.
The JAR was tested in the lower environment before deploying to the higher environment.
What could be the reasons for such a long deployment?
Did the problem emerge only with this composite, while all other composites deploy as expected?
Can you create a test composite (input a file and output it via a mediator) and check whether it deploys as expected? If so, there is a problem with your original composite. If not, there's a problem with your install.
This is a bit theoretical, but I'll try to explain my setup as much as I can:
1 server (instance) with a self-hosted GitLab
1 server (instance) for development
1 server (instance) for production
Let's say in my GitLab I have a ReactJS project, and I configured my gitlab-ci.yml as follows:
A job deploy_dev: upon pushing to the dev branch, the updates are copied with rsync to /var/www/html/${CI_PROJECT_NAME} (as a deployment to the dev server).
The runner that picks up the deploy_dev job is a shared runner installed on that same dev server I deploy to, and it picks up jobs with the tag reactjs.
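For reference, a minimal sketch of roughly what that job might look like in gitlab-ci.yml (the stage name and the build/ output directory are assumptions on my part):

```yaml
deploy_dev:
  stage: deploy
  tags:
    - reactjs          # picked up by the shared runner on the dev server
  only:
    - dev
  script:
    # Local copy, since the runner lives on the dev server itself
    - rsync -av --delete build/ /var/www/html/${CI_PROJECT_NAME}/
```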
The question is:
If I want to deploy to production, what best practice should I follow?
I came up with a couple of options, but I don't know which one (if any) is best practice:
Modify gitlab-ci.yml, adding a deploy_prod job with the same reactjs tag, but have the script rsync to the production server's /var/www/html/${CI_PROJECT_NAME} over SSH?
Set up another runner on the production server, let it pick up jobs tagged reactjs-prod, and modify gitlab-ci.yml so that deploy_prod uses the tag reactjs-prod?
Do you have a better way than the two mentioned above?
Last question (related):
Where is the best place to install my runners? Is what I'm doing (having my runners on my dev server) actually OK?
If you can explain the best way (the one you would choose) with its pros and cons, I would be very grateful.
The best practice is to separate your CI/CD infrastructure from the infrastructure where you host your apps.
This is done to minimize the number of variables which can lead to problems with either your applications or your runners.
Consider the following scenarios when you have a runner on the same machine that hosts your application (these can happen even if the runner and app run in separate Docker containers; the underlying machine is still a single point of failure):
The runner executes a CPU/RAM heavy job and takes up most of the resources on the machine. Your application starts experiencing performance problems.
The GitLab runner crashes and puts the host machine in an inoperable state (a Docker panic or whatever). Your production app stops functioning.
Your app breaks the host machine (it doesn't matter how; it can happen), your CI/CD stops working, and you cannot deploy a fix to production.
Consider having a separate runner machine (or machines; GitLab Runner can scale horizontally) that is used to run your deployment jobs against both the dev and production servers.
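With a dedicated runner machine, both jobs can live in the same gitlab-ci.yml and push to the target servers over SSH. A rough sketch (the host names, user, tag, and SSH key handling are assumptions for illustration):

```yaml
deploy_dev:
  stage: deploy
  tags:
    - deploy           # tag of the dedicated runner, not of an app server
  only:
    - dev
  script:
    - rsync -av --delete -e ssh build/ deployer@dev.example.org:/var/www/html/${CI_PROJECT_NAME}/

deploy_prod:
  stage: deploy
  tags:
    - deploy
  only:
    - master
  when: manual         # require a human to trigger production deploys
  script:
    - rsync -av --delete -e ssh build/ deployer@prod.example.org:/var/www/html/${CI_PROJECT_NAME}/
```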
I agree with @cecunami's answer.
As an example, in our org we have a dedicated VM only for the runner, which is explicitly monitored by our teams.
Since we first created the machine, its CPU, RAM, and storage demands have grown massively, which is why this infrastructure should be kept separate.
This question is about Jenkins CI configuration.
We are working on a small open-source project. This project has the following components:
A1: Core Platform Web Archive (WAR)
A2: Social Feed aggregation WAR (it has nothing to do with A1 and A3)
A3: Transactional Platform WAR
Angular UI for A1
Angular UI for A3
A3 utilizes services exposed by A1 and A2.
We would like to automate testing and are planning to use Jenkins. (As of now testing is manual, and test cases are scripted in TestLink.)
Here is my wish-plan:
The QA team sends a command e-mail to build the application bundle.
Building and delivering the app bundle involves the following steps:
1. A1.war - this will be copied from a predefined place (can we specify it in the e-mail?)
2. A2.war - this will be copied from a predefined place (can we specify it in the e-mail?)
3. A3.war - this is a Maven project; it will be compiled, packaged, and copied to a standard destination
4. The UI directories are copied.
5. Once all of the above steps are completed, one of our custom scripts should be executed (it will prepare a Docker image and deploy it on the QA server).
6. An e-mail is sent with the result.
7. The QA team starts testing in TestLink.
8. Results are logged and the team is notified.
How should we configure Jenkins for this? I read about Pipelines (scripted/declarative), and that looks like the closest choice.
I will be glad to hear opinions, and thanks in advance.
Best Regards
Declarative Pipelines are the newest and generally recommended way to configure your jobs if you're just getting started with Jenkins (if you had already started with Scripted, it's not a given that you must migrate).
I think you might be talking about triggering builds based on e-mails. I'd recommend avoiding that and instead having your QA folks kick off the builds via Jenkins (UI or API), specifying anything unique to one build using build parameters.
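As a rough sketch of such a parameterized Declarative Pipeline (the parameter names, paths, deploy script, and e-mail address are all assumptions for illustration, not your actual setup):

```groovy
pipeline {
    agent any
    parameters {
        // Hypothetical predefined locations for the prebuilt WARs
        string(name: 'A1_WAR_PATH', defaultValue: '/builds/A1.war', description: 'Where to copy A1.war from')
        string(name: 'A2_WAR_PATH', defaultValue: '/builds/A2.war', description: 'Where to copy A2.war from')
    }
    stages {
        stage('Collect prebuilt WARs') {
            steps {
                sh 'mkdir -p bundle'
                sh "cp '${params.A1_WAR_PATH}' '${params.A2_WAR_PATH}' bundle/"
            }
        }
        stage('Build A3') {
            steps {
                // A3 is the Maven project: compile, package, copy into the bundle
                sh 'mvn -f a3/pom.xml clean package'
                sh 'cp a3/target/A3.war bundle/'
            }
        }
        stage('Copy UI directories') {
            steps {
                sh 'cp -r a1-ui a3-ui bundle/'
            }
        }
        stage('Deploy to QA') {
            steps {
                // Your custom script: builds the Docker image and deploys to the QA server
                sh './deploy-to-qa.sh bundle'
            }
        }
    }
    post {
        always {
            // Requires the email-ext plugin
            emailext to: 'qa-team@example.org',
                     subject: "${currentBuild.fullDisplayName}: ${currentBuild.currentResult}",
                     body: "Details: ${env.BUILD_URL}"
        }
    }
}
```

Your QA team would then use "Build with Parameters" in the Jenkins UI (or the equivalent API call) instead of sending a command e-mail.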
The email-ext plugin can send the e-mails.
Jenkins logs every build, so you'll have a record of what happened over time.
So you need to read a lot of docs, craft a Jenkinsfile, and keep iterating on it. There will be a bit of a learning curve, but you'll end up with your build completely codified, which will be a big help for the long-term maintainability of your open-source project. Good luck!
I'm looking at implementing a blue/green deployment strategy for a database-driven web application. We are currently using TeamCity and Octopus Deploy.
To my knowledge, to achieve this strategy the database changes need to be made such that both versions of the application continue to work, so that in the case of a rollback the database changes don't need to be reverted.
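To make that concrete, the usual pattern is an "expand/contract" migration. A minimal sketch (the table and columns are invented for illustration; exact syntax varies by RDBMS):

```sql
-- Expand phase: add the new column alongside the old ones, so both
-- application versions keep working (v1 ignores it, v2 uses it).
ALTER TABLE customer ADD COLUMN full_name VARCHAR(200) NULL;

-- Backfill the new column from the existing data.
UPDATE customer SET full_name = first_name || ' ' || last_name
WHERE full_name IS NULL;

-- Contract phase: only after v2 is live everywhere and a rollback to v1
-- is no longer needed, drop the old columns in a later release.
-- ALTER TABLE customer DROP COLUMN first_name;
-- ALTER TABLE customer DROP COLUMN last_name;
```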
I have read Octopus's suggested implementation of this here.
My question:
Does anyone test the currently active application in prod against the database changes prior to promoting to prod? E.g. in Test or UAT?
If so, how do you fit this requirement into the deployment strategy, especially when configuring it with Octopus?
> Does anyone test the current active application in prod against database changes prior to promoting to prod?
Octopus lets you easily deploy the current live version to your pre-production environment, so you can test it against the upgraded database prior to deploying the upgraded database to your live server.
So if you have version 1 of your application live with database version a, and have version b of your database coming through, followed by version 2 of your application, you can test these combinations in pre-production:

| App | DB | Purpose |
| --- | --- | --- |
| 1 | a | Same as live |
| 1 | b | First phase of testing (followed by release of `b`) |
| 2 | b | Second phase of testing (followed by release of `2`) |
I am trying to do MSI web deployments with Chef. I have about 400 web servers with the same configuration. We will do the deployment in two slots of 200 servers each.
I will follow the steps below for a new release:
1) Increase the cookbook version.
2) Upload the cookbook to the server.
3) Update the cookbook version in the role and run list.
The cookbook performs a lot of steps, like installing 7 MSIs, updating IIS settings, updating the web.config file, and adding registry entries. Once deployment is done, we need to notify the testing team so that they can start testing. My question is: how can I ensure the deployment completed successfully on all the machines? How can I find out if one MSI was not installed on one machine, or one web.config file was not updated properly?
My understanding is that chef-client runs every 30 minutes by default, so I have to wait up to 30 minutes for the deployment to complete. Is there any other, push-based way (I can't use Push Jobs, since Chef removed Push Jobs support from Chef High Availability servers), like triggering chef-client via knife from the workstation?
It would be great if anyone who is using Chef for large-scale Windows deployments could share their experience.
Thanks in advance.
I personally use Rundeck to trigger on-demand Chef runs.
Based on your description, I would use two prod environments, one for each group, where you bump the cookbook version constraint for each group separately.
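For example (the environment and cookbook names are illustrative), each slot gets its own environment file, and you bump its version pin when that slot should pick up the release:

```ruby
# environments/prod_slot_a.rb
name 'prod_slot_a'
description 'First 200-server deployment slot'
# Bump this pin to roll the new release out to this slot only
cookbook 'web_deploy', '= 2.1.0'
```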
For the reporting, at this scale consider buying a license to get chef-manage and chef-reporting so you'll have a complete overview; the next option is to use a handler to report the run status and send a mail if there was an error during the run.
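A minimal sketch of such an exception handler (the SMTP host and addresses are placeholders):

```ruby
# /var/chef/handlers/mail_handler.rb
require 'chef/handler'
require 'net/smtp'

class MailOnFailure < Chef::Handler
  def report
    # run_status is provided by Chef::Handler; failed? is true when the run raised
    return unless run_status.failed?

    body = <<~MAIL
      Subject: Chef run FAILED on #{node.name}

      #{run_status.formatted_exception}
    MAIL

    # Placeholder SMTP host and addresses -- adjust for your environment
    Net::SMTP.start('smtp.example.org') do |smtp|
      smtp.send_message(body, 'chef@example.org', 'testing-team@example.org')
    end
  end
end
```

It would then be registered in each node's client.rb:

```ruby
# client.rb
require '/var/chef/handlers/mail_handler'
exception_handlers << MailOnFailure.new
```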
Nothing in here is specific to Windows, so really you are asking how to use Chef in a high-churn environment. I would highly recommend checking out the new Policyfile workflow; we've had a lot of success with it, though it has some sharp limitations. I've got a guide up at https://yolover.poise.io/.

Another solution on the cookbook/data release side is to move a lot of your tunables (e.g. versions of things to deploy) out of the cookbook and into a little web service somewhere, then have your recipe code read from that to get its tuning data.

As for the push vs. pull question, most people end up with a hybrid. As @Tensibai mentioned, Rundeck is a popular push-based option. Usually you still leave background interval runs on a longer cycle time (maybe 1 or 2 hours) to catch config drift and use the push system for more specific deploy tasks. Beyond Rundeck you can also check out Fabric, Capistrano, MCollective, and SaltStack (you can use its remote execution layer without the CM stuff). Chef also has its own Push Jobs project, but I think I can safely say you should avoid it at this point; it never got enough community momentum to really go anywhere.
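For completeness, an ad-hoc push-style run can also be triggered straight from a workstation with knife; on Windows nodes this is usually done over WinRM via the knife-windows plugin (exact flag names vary by version, and the credentials below are placeholders):

```sh
# Run chef-client on all nodes matching a search, over WinRM
knife winrm 'role:web' 'chef-client' --winrm-user Administrator --winrm-password 'secret'

# SSH equivalent for non-Windows nodes
knife ssh 'role:web' 'sudo chef-client' -x deploy
```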
As per this question (Sonar throwing error BadDatabaseVersion), it is not possible to run two Sonar instances using the same database. Everything I've read so far implies the only solution is to shut down both instances and restart only the one you want to keep. Is my only option for running two Sonar instances to have another Sonar database? This seems pretty costly, and it seems that the only thing preventing Sonar from running another instance is the shared server.core.id.
So I guess I have two questions:
1) Why is Sonar built with this dependency?
2) Are there any other options to run two instances on the same DB?
Indeed, SonarQube currently can't have two servers started on the same DB. This limitation (referenced in this JIRA ticket) has been here by design since the very beginning, to make sure that you can't start two servers that have different sets of plugins but point to the same DB.
To answer your second question: there's no way to have two instances pointing to the same DB. But we've started a big refactoring to eventually make it possible to have a cluster of SonarQube instances, so feel free to watch the SONAR-5391 ticket and vote for it.