V10 Map node and database configuration by environment - ibm-integration-bus

I can't seem to find this in the docs.
If I have a flow with a Map node, and that node has a database insert as one of its outputs, I can configure that just fine. What I can't figure out is how to change the database target when I move from environment to environment (dev to test to production). In v7 I could switch this with a property file and the mqsiapplybaroverride command, but in v10 I no longer see the database instance name in the output of mqsireadbar.
Anyone know what the 'new' way to do this is?

https://www.ibm.com/support/knowledgecenter/en/SSMKHH_10.0.0/com.ibm.etools.mft.doc/cm28825_.htm
I found it: you have to use a JDBCProvider configurable service. Apparently, you can't use ODBC for this anymore.
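Since a JDBCProviders configurable service lives on the integration node rather than in the BAR file, each environment can carry its own definition and the flow picks up whichever one is defined there. A minimal sketch, assuming an integration node named IBNODE and a DB2-style target; the service name, host, port, database, and security identity below are placeholders:

mqsicreateconfigurableservice IBNODE -c JDBCProviders -o MyAppDB \
  -n serverName,portNumber,databaseName,securityIdentity \
  -v db.test.example.com,50000,APPDB,myAppDbAuth

# map the security identity to real credentials on this node
mqsisetdbparms IBNODE -n jdbc::myAppDbAuth -u dbuser -p dbpassword

Run the same pair of commands with environment-specific values on the dev, test, and production nodes, and the deployed BAR file no longer needs to change.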

Related

Query WildFly for a value and then use that in a CLI script

I have an Ansible script to update and maintain my WildFly installation. One of the tasks in this setup is managing the MySQL driver: in order to update that driver, I first have to disable the application that uses it before I can replace it and set up all my datasources anew.
My CLI script starts with the following lines:
if (outcome == success) of /deployment=my-app-1.1.ear:read-resource
deployment disable my-app-1.1.ear
end-if
My problem is that this makes me very dependent on the actual name of the application, and that name can change over time since it contains the version information.
I tried the following:
set foo=`ls /deployment`
deployment disable $foo
It did not work: when I look at foo, I see that it is not my-app-1.1.ear but ["my-app-1.1.ear"] -- so I feel I am going in the right direction, even though I have not got it right yet.
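One possible workaround, sketched here under the assumption that the script is driven from a shell and that jboss-cli.sh is on the PATH: resolve the deployment name outside the CLI and pass the plain string in. The my-app- prefix is just an example pattern.

# find the current deployment name, then disable it by its plain name
NAME=$(jboss-cli.sh --connect --command="ls /deployment" | grep '^my-app-')
jboss-cli.sh --connect --command="deployment disable $NAME"

Because the name is picked out by grep rather than inside the CLI, no list formatting like ["my-app-1.1.ear"] gets in the way.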

Issue setting Jenkins environment variables on EC2-Fleet

We are having issues setting the Jenkins environment variables on our dynamic EC2-Fleet.
We already have a fixed master (Linux) and a fixed Windows slave, but wanted to add slaves dynamically when the load on the system becomes heavy.
For this we created a Spot Instance Request in AWS that spins up Linux machines from an AMI, and we control it via the EC2-fleet-plugin in Jenkins.
Before this EC2-fleet can be of any help, our jobs must be able to run on its nodes.
Most of our jobs use Jenkinsfiles and need certain environment variables to be set, but the EC2-fleet-plugin does not provide a way to set environment variables (https://issues.jenkins-ci.org/browse/JENKINS-36544).
As suggested on that ticket (JENKINS-36544), we tried to set the environment variables in "System Configuration" for the dynamic EC2 slaves and to set the environment variables for the other nodes in their "Node Configuration", overriding the "System Configuration", or so we thought.
This would work if this bug didn't exist: https://issues.jenkins-ci.org/browse/JENKINS-44425. Because of it, the "System Configuration" overrides the "Node Configuration" instead of the other way around, so we can't use this approach: the existing nodes would no longer have the correct environment variables.
As a last resort we tried to set the environment variables on the dynamic EC2 slaves by creating an /etc/profile.d/jenkinsvars.sh on the AMI used by the Spot Instance Request.
Such a script is automatically run system-wide on login (https://help.ubuntu.com/community/EnvironmentVariables#A.2Fetc.2Fprofile.d.2F.2A.sh).
In addition, we attempted to set them in /home/ubuntu/.profile on the AMI, singling out the ubuntu user, which is the user running the Jenkins agent (https://help.ubuntu.com/community/EnvironmentVariables#A.2BAH4-.2F.profile).
But it appears Jenkins does not use these environment variables but its own...
A way that works is to adapt the jobs to load a Groovy file that is part of the AMI and sets the environment variables we need, but that would mean changing almost all the jobs we have, on top of all the Jenkinsfiles included in our repositories (Bitbucket project).
We would like to avoid this.
Try the following strategy:
1. Leverage User Data to run a shell script when the Spot Instance launches. This is the primary way recommended by Amazon and the plugin authors.
2. Instead of saving variables into the environment, have the user data script save them to /var/tmp, to /etc/profile, or to the Parameter Store; the answers to this SO question cover the options. If you want your values encrypted, use the KMS-backed Parameter Store; if you don't care, use one of the others. Choose whichever best fits your needs (a sketch follows this list).
3. Alter your Jenkins job to pause until your user data script has completed (refer to the plugin's documentation).
4. Change your Jenkins job to pick up the variables from the location you chose in step 2.
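A minimal user data sketch for step 2, taking the file-based route; the variable names and the /var/tmp/jenkinsvars path are placeholders:

#!/bin/bash
# runs once at instance launch; persists variables where a later job step can read them
cat > /var/tmp/jenkinsvars <<'EOF'
BUILD_ENV=test
ARTIFACT_REPO=https://repo.example.com
EOF
# encrypted alternative: aws ssm put-parameter --name /jenkins/BUILD_ENV --value test --type SecureString

A job step can then source or parse /var/tmp/jenkinsvars (or call aws ssm get-parameter) before the build proper starts.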
Try restarting the server environment.
Just saying.
So we can't use this as the existing nodes would not have the correct environment variables anymore.
Update your existing nodes to load the environment variables when they are provisioned / started, then remove them from the System configuration, then add them to the Node configuration.
You could also try setting the Slave command prefix field to ENV_VAR1=val1 ENV_VAR2=val2, although I haven't tried that.
Thirdly, you can try putting your variables directly into /etc/profile, which should always be loaded no matter which user you log in as (see the snippet below).
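For that route the addition is just plain export lines; the names and values here are examples only:

# appended to /etc/profile on the AMI; read by every login shell
export BUILD_ENV=test
export ARTIFACT_REPO=https://repo.example.com

Note that this still relies on the agent being started through a login shell, which was the sticking point with /etc/profile.d above.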
However, the easiest option by far is to make all of your drones/agents exactly the same and set your environment variables in whatever scripts you run to build your projects. Use Docker to pull dependencies onto the agents as necessary during the job and to set up specific environments for your applications. This greatly simplifies the maintenance and configuration of your agents.
The Jenkins version and the version of the EC2 plugin are missing from the question, but according to the description in this merged pull request, this bug should be fixed now: https://github.com/jenkinsci/ec2-plugin/pull/440#issuecomment-597160730
Jenkins version: the change works on both <=2.204 and >=2.205
EC2 plugin version: ec2-1.50 or later
(JENKINS-36544 - Fix Node Properties on Jenkins 2.205+, #440, by jhansche)
From the Pull Request description:
Navigate to the cloud configuration screen (this moves to a new page >=2.205)
Click "Add a new cloud"
Click "Amazon EC2"
Under the "AMIs" section, click "Add"
At the bottom of the AMI block, expand "Advanced"
Expect to see the "Node Properties" block at the bottom of the AMI block.
The Node Properties block contains the environment variables.

Talend - Need server environment variables instead of local machine

I'm using Talend Enterprise Edition, and I'm trying to use system environment variables as parameters for my jobs. When using System.getenv("paramname") and running the job, I get the values from my local machine. What do I need to do to get the values from the Talend server machine? The idea is to centrally add all the parameters as environment variables on the Talend server, and all users should use those env variables as parameters. Any input is appreciated.
Instead of System.getenv("paramname"), try System.getProperty("paramname") and pass the values as -D arguments to the JVM that runs the jobs on the Talend server; system properties are read from that JVM, not from your local machine.
Hope this helps...
This thread might also be of use, as it can accomplish similar goals: Reading properties from an external file. I included screenshots and a description in that answer.
This is a similar approach, but it allows placing a common.properties (or other named) file for all Talend jobs running on the same job server to use. It also makes it easy to have different Talend job servers (dev, QA, production, etc.) where the same jobs are installed but pull their correct, environment-dependent settings from the common property file.
It uses the components tFileInputDelimited and tContextLoad to accomplish the task.
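A rough sketch of the file side of that setup; the /opt/talend path and the keys are assumptions, and each job server carries its own values:

# on the test job server; dev and production servers get their own copies
cat > /opt/talend/common.properties <<'EOF'
dbHost=db.test.example.com
dbUser=etl_user
EOF

A tFileInputDelimited pointed at that path, feeding tContextLoad, then overrides the job's context variables at runtime on whichever server the job runs.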

Disabling/Pause database replication using ML-Gradle

I want to disable database replication from the replica cluster in MarkLogic 8 using ml-gradle. After updating the configuration, I also want to re-enable it.
There are tasks for enabling and disabling flexrep in ml-gradle, but I couldn't find any such thing for database replication. How can this be done?
ml-gradle uses the Management API to handle configuration changes. Database replication is controlled by sending a PUT request to /manage/v2/databases/[id-or-name]/properties. Update your ml-config/databases/content-database.json file (example that does not include that property) to include database-replication, with replication-enabled: true.
To see what that object should look like, you can send a GET request to the properties endpoint.
You can create your own command to set replication-enabled - see https://github.com/rjrudin/ml-gradle/wiki/Writing-your-own-management-task
I'll also add a ticket for making official commands - e.g. mlEnableReplication and mlDisableReplication, with those defaulting to the content database, and allowing for any database to be specified.
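Until such commands exist, you can also issue the call directly. A minimal sketch against a local cluster, assuming the default manage port 8002, admin credentials, and a database named my-content-db; confirm the exact shape of the database-replication object with a GET on the same endpoint first, as suggested above:

# disable replication on the replica cluster's copy of the database
curl --anyauth -u admin:admin -X PUT -H "Content-Type: application/json" \
  -d '{"database-replication": {"replication-enabled": false}}' \
  http://localhost:8002/manage/v2/databases/my-content-db/properties

Re-enabling after the configuration update is the same request with true.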

How to install applications to a WebSphere 7.0 cluster using wsadmin?

I want to deploy to all four processes on a WebSphere cluster with two nodes. Is there a way of doing this with one Jython command, or do I have to call AdminControl.invoke on each one?
The easiest way to install an application using wsadmin is with AdminApp, not AdminControl.
I suggest you download wsadminlib.py (got the link from here).
It has a lot of functions; one of them is installApplication, which also works with clusters.
Edit:
Lately I found out about AdminApplication, which is a script library included in WAS 7 (/opt/IBM/WebSphere/AppServer/scriptLibraries/application/V70).
The documentation in the info center is not great, but it's a .py file you can look inside to see what it does.
It is imported automatically into wsadmin, and you can use it without any imports or other configuration.
Worth a check.
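As a sketch of what that can look like in one pass (the install root, application name, EAR path, and cluster name below are placeholders; installAppWithClusterOption is one of the functions listed in the library's .py file):

# install the EAR against the cluster, then persist the configuration change
/opt/IBM/WebSphere/AppServer/bin/wsadmin.sh -lang jython \
  -c "AdminApplication.installAppWithClusterOption('MyApp', '/tmp/MyApp.ear', 'MyCluster')" \
  -c "AdminConfig.save()"

Installing against the cluster covers every member, so there is no need to touch the four processes individually.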
@aviram-segal is right; wsadminlib is really helpful for this.
I use the following syntax:
arg = ["-reloadEnabled", "-reloadInterval '0'", "-cell "+self.cellName, "-node "+self.nodeName, "-server '"+ self.serverName+"'", "-appname "+ name, '-MapWebModToVH',[['.*', '.*', self.virtualHost]]]
AdminApp.install(path, arg)
Where path is the location of your EAR/WAR file.
You can find the documentation here.
