I'm using Talend Enterprise Edition and trying to use system environment variables as parameters for my jobs. When I use System.getenv("paramname") and run the job, I get the values from my local machine. What do I need to do to get the values from the Talend server machine instead? The idea is to centrally define all the parameters as environment variables on the Talend server, and all users should use those environment variables as parameters. Any input is appreciated.
Instead of System.getenv("paramname"), please use System.getProperty("paramname"), as System.getenv() is deprecated.
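As a minimal standalone sketch of the difference (the name "paramname" is just a placeholder, and the property would have to be set on the Talend JobServer's JVM, e.g. via -Dparamname=value in its JVM options, for the first call to return anything there):

public class ReadServerParam {
    public static void main(String[] args) {
        // JVM system property: only visible if the JVM running the job
        // (e.g. the Talend JobServer) was started with -Dparamname=value
        String fromProperty = System.getProperty("paramname");

        // OS environment variable: read from whichever machine actually runs the job
        String fromEnv = System.getenv("paramname");

        System.out.println("system property: " + fromProperty);
        System.out.println("environment variable: " + fromEnv);
    }
}

Inside a job, the same two calls can be placed in a tJava component to populate a context variable.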
Hope this helps...
This thread might also be of use, as it could be used to accomplish similar goals: Reading properties from an external file. I included screenshots and a description in that answer.
This is a similar approach, but it allows placing a common.properties file (or another named file) on the job server for all Talend jobs running there to use. This also makes it easy to have different Talend job servers (dev, QA, production, etc.) where the same jobs are installed but pull their correct settings from the common property file (environment-dependent).
It uses the components tFileInputDelimited and tContextLoad to accomplish the task.
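As a rough, hypothetical illustration (the file path, the "=" separator and the key names are my own assumptions), the job server could hold a file like the one below, read by tFileInputDelimited with "=" as the field separator into a two-column key/value schema that feeds tContextLoad:

# /opt/talend/conf/common.properties (example location on the job server)
db_host=qa-db.internal.example.com
db_port=5432
ftp_user=batch_user

Each row then overwrites the context variable of the same name at run time, so dev, QA and production job servers can carry different values under the same file path.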
I'm installing the latest MMASetup-AMD64.exe and want to hook up to Log Analytics AND SCOM, but I'm having trouble finding the command-line parameters for SCOM. Does anybody know them? The Log Analytics ones are well documented and are here (a full install command is sketched after the list):
ADD_OPINSIGHTS_WORKSPACE=1
OPINSIGHTS_WORKSPACE_ID="1234"
OPINSIGHTS_WORKSPACE_KEY="5678"
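For context, a hedged sketch of how those documented parameters are typically passed to the bundled installer (the /c /t: extraction step and the AcceptEndUserLicenseAgreement flag are as far as I recall from the Log Analytics agent docs; paths and values are placeholders):

MMASetup-AMD64.exe /c /t:C:\MMAExtract
C:\MMAExtract\setup.exe /qn ADD_OPINSIGHTS_WORKSPACE=1 OPINSIGHTS_WORKSPACE_ID="1234" OPINSIGHTS_WORKSPACE_KEY="5678" AcceptEndUserLicenseAgreement=1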
I need the equivalent parameters for the management group name and the management server. Effectively, I want to complete these boxes, but via the command line.
Thanks in advance.
I believe parameters for the management group, secure port, etc. are not available with MMASetup-AMD64.exe. Here are the supported command-line options for it. So, if it's feasible in your environment and setup, try using MOMAgent.msi to install the agent manually, that is, to deploy System Center Operations Manager agents from the command line or by using the Setup Wizard. Parameters like MANAGEMENT_GROUP, SECURE_PORT, etc. are all explained, along with examples, in this document; please refer to it for more information.
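As a hedged sketch of what such a manual install could look like (server names, paths and values are placeholders, and the exact parameter names should be confirmed against the linked MOMAgent.msi documentation for your Operations Manager version):

msiexec.exe /i C:\AgentInstall\MOMAgent.msi /qn /l*v C:\AgentInstall\install.log ^
  USE_SETTINGS_FROM_AD=0 USE_MANUALLY_SPECIFIED_SETTINGS=1 ^
  MANAGEMENT_GROUP=MyMgmtGroup MANAGEMENT_SERVER_DNS=scom-ms01.contoso.com ^
  SECURE_PORT=5723 ACTIONS_USE_COMPUTER_ACCOUNT=1 AcceptEndUserLicenseAgreement=1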
Other references related to OM agents and OM groups:
Process manual agent installations
Configuring Windows agents
Operations Manager agents
Creating and managing groups
Connecting management groups in Operations Manager
Planning a Management Group Design
We are having issues setting the Jenkins environment variables on our dynamic EC2-Fleet.
We already have a fixed master (Linux) and a fixed Windows slave, but wanted to add slaves dynamically when the load on the system becomes heavy.
For this we created a Spot Request Instance in AWS that spins up Linux machines from an AMI, and we control this via the EC2-fleet-plugin in Jenkins.
Before this EC2-fleet can be of any help, our jobs must be able to run on its nodes.
Most of our jobs use Jenkinsfiles and need certain environment variables to be set but the EC2-fleet-plugin does not provide the possibility to set environment variables (https://issues.jenkins-ci.org/browse/JENKINS-36544).
As suggested in that ticket (JENKINS-36544), we tried to set the environment variables in the "System Configuration" for the dynamic EC2 slaves, and to set the environment variables for the other nodes in their "Node Configuration", which would override the "System Configuration", or so we thought.
This would work if this bug didn't exist: https://issues.jenkins-ci.org/browse/JENKINS-44425. Because of this bug, the "System Configuration" overrides the "Node Configuration" instead of the other way around. So we can't use this, as the existing nodes would not have the correct environment variables anymore.
As a last resort we tried to set the environment variables on the dynamic ec2 slaves by creating an /etc/profile.d/jenkinsvars.sh on the AMI used by the Spot Request Instance.
This script would be automatically run on login system wide (https://help.ubuntu.com/community/EnvironmentVariables#A.2Fetc.2Fprofile.d.2F.2A.sh).
In addition, we attempted to set them in /home/ubuntu/.profile on the AMI, targeting only the ubuntu user, which is the user running the Jenkins agent (https://help.ubuntu.com/community/EnvironmentVariables#A.2BAH4-.2F.profile).
But it appears Jenkins does not use these environment variables and instead uses its own...
A way that does work is to adapt the jobs to load a Groovy file that is part of the AMI and sets the environment variables we need, but that would mean changing almost all the jobs we have, as well as all the Jenkinsfiles included in our repositories (Bitbucket project).
We would like to avoid this.
Try the following strategy:
Leverage User Data to run a shell script when the Spot Instance launches. This is the approach recommended by Amazon and the plugin authors.
Instead of saving the variables into the environment, have the user data script save them to /var/tmp, to /etc/profile, or to Parameter Store. Refer to the answers in this SO question and choose the one that best fits your needs: if you want your values encrypted, use the KMS-backed Parameter Store; if you don't care, use one of the other locations. (A sketch of such a user data script follows this list.)
Alter your Jenkins job to pause until your user data script has completed running (refer to the documentation from the plugin)
Change your Jenkins job to pick up the variables from the location you chose in step 2.
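A minimal sketch of such a user data script, assuming the variable names and the /var/tmp/jenkins_env.sh location are placeholders you would adapt:

#!/bin/bash
# EC2 user data: runs once at instance launch (as root).
# Write the variables the Jenkins jobs need to a well-known file
# instead of relying on the agent's login environment.
{
  echo 'export BUILD_ENV=dev'
  echo 'export ARTIFACT_REPO=https://artifacts.example.com/repo'
} > /var/tmp/jenkins_env.sh
chmod 644 /var/tmp/jenkins_env.sh

A pipeline step could then pick the values up (step 4) with something like sh '. /var/tmp/jenkins_env.sh && ./build.sh'.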
Try restarting the server environment.
Just saying.
So we can't use this as the existing nodes would not have the correct environment variables anymore.
Update your existing nodes to load the environment variables when they are provisioned / started, then remove them from the System configuration, then add them to the Node configuration.
You could also try setting the Slave command prefix field to ENV_VAR1=val1 ENV_VAR2=val2, although I haven't tried that.
Thirdly, you can try putting your variables directly into /etc/profile, which should always be loaded no matter which user you log in as.
However, the easiest approach by far is to make all of your drones/agents exactly the same and set your environment variables in whatever scripts you run to build your projects. Use Docker to pull dependencies to the agents as necessary during the job and to set up specific environments for your applications. This greatly simplifies the maintenance and configuration of your agents.
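For example, a minimal declarative Jenkinsfile sketch along those lines (image name, variable names and build command are placeholders), so the environment travels with the job rather than with the node configuration:

pipeline {
    // any agent with Docker available can run this; the image provides the toolchain
    agent {
        docker { image 'maven:3.9-eclipse-temurin-17' }
    }
    environment {
        // job-level variables instead of per-node configuration
        DEPLOY_ENV   = 'dev'
        ARTIFACT_URL = 'https://artifacts.example.com/repo'
    }
    stages {
        stage('Build') {
            steps {
                sh 'mvn -B -DskipTests package'
            }
        }
    }
}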
The Jenkins version and the version of the EC2 plugin are missing from the question, but according to the description in this merged pull request, this bug should be fixed now: https://github.com/jenkinsci/ec2-plugin/pull/440#issuecomment-597160730
Jenkins version: the change works in both <= 2.204 and >= 2.205
EC2 plugin version: >= ec2-1.50
JENKINS-36544 - Fix Node Properties on Jenkins 2.205+ (#440) #jhansche
From the Pull Request description:
Navigate to the cloud configuration screen (this moves to a new page >=2.205)
Click "Add a new cloud"
Click "Amazon EC2"
Under the "AMIs" section, click "Add"
At the bottom of the AMI block, expand "Advanced"
Expect to see a "Node Properties" section at the bottom of that block
The Node Properties section contains the environment variables.
I can't seem to find this in the docs.
If I have a flow with a Map node and that node has a database insert as one of its outputs, I can configure that just fine. What I can't figure out is how to change the database target when I go from environment to environment (dev to test to production). In v7 I could switch this with a property file and the mqsiapplybaroverride command, but in v10 I no longer see the database instance name in the output of mqsireadbar.
Anyone know what the 'new' way to do this is?
https://www.ibm.com/support/knowledgecenter/en/SSMKHH_10.0.0/com.ibm.etools.mft.doc/cm28825_.htm
I found it: you have to use a JDBCProviders configurable service. Apparently you can't use ODBC for this anymore.
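A hedged sketch of what defining such a provider per environment could look like (integration node name, service name and values are placeholders; check the exact JDBCProviders property names against the v10 documentation):

mqsicreateconfigurableservice IB10NODE -c JDBCProviders -o APPDB
mqsichangeproperties IB10NODE -c JDBCProviders -o APPDB -n serverName,portNumber,databaseName -v dev-db.example.com,50000,APPDB

The same service name can then be defined with different host/port/database values on the dev, test and production integration nodes, so the BAR file itself does not need to change between environments.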
We have a number of (developer) eXist-db database servers, and some staging/production servers.
Each has its own, slightly different configuration.
We need to select which configuration to load and use in queries.
The configuration is to be stored in an XML file within the repository.
However, when syncing the content of the servers, a single burnt-in XML file is not sufficient, since it is overwritten during copying from the other server.
For this, we need the physical name of the actual database server.
The only function I found, request:get-server-name, is not quite stable, since a single eXist server can be accessed through a number of different (localhost, intranet or external) URLs. That leads to unnecessary duplication of the configuration, one copy for each external URL...
(Accessing local files in the file system is neither secure nor fast.)
How can I get the physical name of the eXist-db server from XQuery?
I'm sorry, but I don't fully understand your question: are you talking about eXist's default conf.xml or your own configuration file that you need to store in a VCS repo? Should the XQuery be executed on one instance and trigger an event in all the others, or just some, or...? Without some code it is difficult to see why and when something gets overwritten.
You could try console:jmx-token, which does not vary depending on the URL (at least it shouldn't).
Also, you might find it much easier to use a Docker-based approach, either with multiple instances coordinated via docker-compose, or to keep the individual configs from interfering with each other when moving from dev to staging to production: https://github.com/duncdrum/exist-docker
If I understand correctly, you basically want to be able to get the hostname or the IP address of a server from XQuery. If the functions in the XQuery Request module are not doing as you wish, then another option would be to set a Java System Property when starting eXist-db. This system property could be the internal DNS name or IP of your server, for example: -Dour-server-name=server1.mydomain.com
From XQuery you could then read that Java System property using util:system-property("our-server-name").
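For example, a minimal XQuery sketch that selects a per-server configuration document based on that property (the property name comes from the example above; the /db/config path and file naming are assumptions):

xquery version "3.1";

(: reads the JVM system property set at startup, e.g. -Dour-server-name=server1.mydomain.com :)
let $server := util:system-property("our-server-name")
(: picks the matching configuration document stored in the database :)
return doc("/db/config/" || $server || ".xml")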
I need to display all existing environment variables for snapshots installed on BPM. Is there a way I can do this using a wsadmin command?
I don't think we have a wsadmin command to display all existing variables for a snapshot. If this is something that would be useful, I would suggest opening a Request for Enhancement (RFE) with BPM development for their consideration. Here is a link on how to do this:
https://developer.ibm.com/answers/questions/175980/how-do-i-submit-an-enhancement-request-or-rfe-for.html
Thanks!
I agree with Paula: there is no wsadmin command to display environment variables.
However, you can check out:
BPMSetEnvironmentVariable:
https://www.ibm.com/support/knowledgecenter/SSFPJS_8.5.6/com.ibm.wbpm.ref.doc/topics/rref_bpmsetenvironmentvariable.html
And the REST call to get an environment variable:
https://www.ibm.com/support/knowledgecenter/SSV2LR/com.ibm.wbpm.ref.doc/rest/bpmrest/rest_bpm_wle_v1_system_env_variable_get.htm
This can be achieved by the BPM REST Interface APIs.
Use this API to retrieve the list of process applications, in which you can find the ID of the snapshot you are interested in.
https://<bpm_host_or_ip>:9443/rest/bpm/wle/v1/processApps
Use this API to retrieve the environment variables and their default values.
https://<bpm_host_or_ip>:9443/rest/bpm/wle/v1/processAppSettings?snapshotId=2064.11a398d0-c6b8-41e4-b8eb-daaef864be14
You can easily use jq in a Linux environment to parse out the information you are interested in.
Finally, use this API to retrieve the current value of a given environment variable.
https://<bpm_host_or_ip>:9443/rest/bpm/wle/v1/system/env/variable?processAppAcronym=<APP_ACRONYM>&name=<ENV_VAR_NAME>
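A hedged sketch of calling these endpoints from a Linux shell (host, credentials and the placeholder acronym/variable/snapshot values would need to be replaced; -k merely skips certificate validation for self-signed test systems):

# list process apps and their snapshots; pretty-print to find the snapshot ID
curl -k -u bpmadmin:password "https://<bpm_host_or_ip>:9443/rest/bpm/wle/v1/processApps" | jq .

# environment variables and their default values for one snapshot
curl -k -u bpmadmin:password "https://<bpm_host_or_ip>:9443/rest/bpm/wle/v1/processAppSettings?snapshotId=<SNAPSHOT_ID>" | jq .

# current value of a single environment variable
curl -k -u bpmadmin:password "https://<bpm_host_or_ip>:9443/rest/bpm/wle/v1/system/env/variable?processAppAcronym=<APP_ACRONYM>&name=<ENV_VAR_NAME>" | jq .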