We have a number of jobs configured in Control-M, and now we have a new environment in which the same set of jobs needs to be configured.
Is there any way to create Control-M jobs from the backend server by writing shell or other scripts?
Are there any other options that would avoid spending time creating the same jobs again and again for each environment?
Control-M provides functionality to export/import jobs across environments: https://communities.bmc.com/docs/DOC-50416
There are a couple of ways:
From the Planning domain, load the job folders you want to migrate into a blank workspace, then click Export from the top bar and save. This produces an .xml file that you can edit with a text editor; afterwards use Import Workspace to load it back in. You can import the .xml into the same Control-M EM server (if both environments are on one EM server) or into a different EM server.
OR
From the Planning domain, load the job folders you want to migrate, right-click the folder and select Duplicate, then update the new folder through the GUI (if doing this, I'd unload the source folder afterwards just to ensure nothing is overwritten there). You can use the Find & Update option. This will only work if your environments are on the same Control-M EM server.
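If you would rather script this from the backend server, as the question asks, the Control-M Automation API command line can export folder definitions as JSON and deploy them into another environment. A rough sketch, assuming the ctm CLI is installed and both environments have already been registered with ctm environment add (the data center, folder and environment names below are placeholders):

# Pull the folder definitions from the source environment as JSON
ctm deploy jobs::get -s "ctm=SOURCE_DATACENTER&folder=MY_FOLDER" > my_folder.json

# Edit my_folder.json as needed (hosts, run-as users, ...), then validate
# it and push it to the target environment
ctm build my_folder.json -e target_env
ctm deploy my_folder.json -e target_env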
I want to add three extensions to NiFi (nifi-encryptMD5-nar-1.0.nar-unpacked, nifi-getOperator-nar-1.0-SNAPSHOT.nar-unpacked, nifi-splitAttributeValue-nar-1.0.nar-unpacked).
I added the extension folders to the directory /opt/nifi/nifi-1.9.2/work/nar/extensions/.
Then, when I restart the NiFi service, NiFi shuts down and does not come back up. When I force the start as the nifi user, NiFi starts, but the extensions have been deleted from the directory /opt/nifi/nifi-1.9.2/work/nar/extensions/.
You have to put the *.nar packages into the nifi/lib directory.
NiFi will extract them automatically on startup into the nifi/work folder.
As daggett says, you need to use the .nar files, not any unpacked directories.
In your nifi.properties there will be two or more properties that provide locations for NiFi libraries:
nifi.nar.library.directory=./lib
nifi.nar.library.autoload.directory=./extensions
nifi.nar.library.directory.<something>=./<yourdir>
The first is the default and contains all the basic NiFi files. It is only checked on startup, and any valid nars found are unpacked into the work directory and loaded. Generally you don't want to add anything here except in test environments, as it complicates upgrades.
The second is empty by default but it is scanned every 30 seconds for new .nars. These will be unpacked and loaded if possible, but only for new libraries. Already loaded libraries will not be reloaded.
This is a good location to add your validated custom libraries without having to restart NiFi.
The third and any further ones need to be added manually to the properties file. These are loaded on startup only and are useful if you have a lot of custom processors and want to keep them organized.
In your situation I'd put the .nars in the extensions folder and check the logs to see if they were loaded successfully. You'll then need a full refresh of the browser window (Shift+F5 I think) before they show up in the list of processors.
In a cluster setup, add the .nars on all nodes and verify their availability before trying to add them to the canvas or things might get messy.
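As a concrete example, a minimal sequence could look like this (paths assume the install location from the question, and the .nar file name is just an illustration):

# Copy the packaged .nar (not a *-unpacked directory) into the autoload directory
cp nifi-splitAttributeValue-nar-1.0.nar /opt/nifi/nifi-1.9.2/extensions/

# Watch the app log; within about 30 seconds NiFi should log the nar being unpacked and loaded
tail -f /opt/nifi/nifi-1.9.2/logs/nifi-app.log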
Recently I ran into an issue configuring Last Participant Support on a deployed application. I found an old post about it:
https://www.ibm.com/developerworks/community/forums/html/topic?id=77777777-0000-0000-0000-000014090728
On the server itself I found out how to do it, but with Jython or wsadmin commands I'm not able to find how to do it on the application itself.
The post above does not help me, though. Any ideas?
There is no command assistance available for the action of changing last participant support from the admin console, which typically implies there is no scripting command associated with the action. And there doesn't appear to be a wsadmin AdminApp command to modify the property. Looking at the config repo changes made as a result of the admin console action, the IBM Programming Model Extensions (PME) deployment descriptor file "ibm-application-ext-pme.xmi" for the application is created/modified by the action.
If possible, the best long-term solution would be to use a tool like RAD to generate that extensions file when packaging the application, because then your config changes wouldn't get overridden if you need to redeploy the app. If you can't modify the app, you can script the addition of a PME descriptor file in each of the desired apps, with the knowledge that redeploying the app will overwrite your changes. The changes can be made by doing something along the lines of:
1) create a text file named ibm-application-ext-pme.xmi with contents similar to this:
<pmeext:PMEApplicationExtension xmi:version="2.0" xmlns:xmi="http://www.omg.org/XMI" xmlns:pmeext="http://www.ibm.com/websphere/appserver/schemas/5.0/pmeext.xmi" xmi:id="PMEApplicationExtension_1559836881290">
<lastParticipantSupportExtension xmi:id="LastParticipantSupportExtension_1559836881292" acceptHeuristicHazard="false"/>
</pmeext:PMEApplicationExtension>
2) In wsadmin or your Jython script, do the following (note that in this example the xmi file you created is in the current directory; if it is not, include the full path to it in the createDocument command):
deployUri = "cells/<your_cell_name>/applications/<your_app_name>.ear/deployments/<your_app_name>/META-INF/ibm-application-ext-pme.xmi"
AdminConfig.createDocument(deployUri, "ibm-application-ext-pme.xmi")
AdminConfig.save()
3) restart the server
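If you save the step 2 commands as a standalone script (say add_pme_ext.py, a name used here only for illustration), it can be run with the wsadmin client (use wsadmin.bat on Windows), for example:

<WAS_HOME>/bin/wsadmin.sh -lang jython -f add_pme_ext.py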
Looking to set up a high-performance environment running MongoDB 3.4 on Windows Server 2016 in Azure. I come from a SQL Server/Windows background and was wondering whether there are any options with MongoDB to spread out the I/O workload of mongod. It seems odd that there is only a dbPath option and that you cannot configure separate locations for the database(s), oplog and journal. Am I missing something?
Thanks for any assistance
This is indeed possible, using a couple of different techniques:
The oplog is stored in the local database, so you can keep it in a separate folder by using the storage.directoryPerDB config option.
The journal is stored in a subfolder of the data directory; you can make MongoDB save its journal files in a separate directory by preparing a symbolic link called journal in the data directory, pointing to your other folder.
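For example, a sketch only (drive letters and folder names below are placeholders; stop mongod and move any existing data/journal files before switching). In mongod.cfg, enabling one directory per database keeps the local database, which holds the oplog, in its own subfolder:

storage:
  dbPath: D:\mongo\data
  directoryPerDB: true

Then, from an elevated command prompt, point the journal subfolder at another disk with a directory symbolic link:

mklink /D D:\mongo\data\journal E:\mongo-journal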
I have an existing spark-job. The functionality of this spark-job is to connect to a Kafka server, get the data and then store it into Cassandra tables. Currently this spark-job runs on the server from inside spark-2.1.1-bin-hadoop2.7/bin, but whenever I try to run it from another location it does not run. This spark-job contains some JavaRDD-related code.
Is there any chance I can run this spark-job from outside as well, by adding a dependency in the pom or something else?
whenever I try to run this spark-job from another location, it does not run
spark-job is a custom launcher script for a Spark application, perhaps with some additional command-line options and packages. Open it, review the content and fix the issue.
If it's too hard to figure out what spark-job does and there's no one nearby to help you out, it's likely time to throw it away and replace it with the good ol' spark-submit.
Why don't you use it in the first place?!
Read up on spark-submit in Submitting Applications.
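For reference, a spark-submit invocation for a Kafka-to-Cassandra job typically looks something like the sketch below; the class name, master URL, package coordinates and jar path are placeholders you would replace with whatever your current spark-job script uses:

spark-2.1.1-bin-hadoop2.7/bin/spark-submit \
  --class com.example.KafkaToCassandraJob \
  --master spark://your-master-host:7077 \
  --packages org.apache.spark:spark-streaming-kafka-0-10_2.11:2.1.1,com.datastax.spark:spark-cassandra-connector_2.11:2.0.5 \
  /path/to/your-spark-job.jar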
A newbie question on Spring Batch Admin.
My requirement is that the user should be able to schedule new jobs (passing some parameters for the job functionality) through a web UI. These jobs should be persistent, will be repetitive and could be cancelled or deleted. Also, a report could be generated for last run jobs and to list all the existing jobs with their next run dates.
Perhaps my most important requirement is that this should be possible "on the fly", without requiring redeployment of the web application or a server restart.
Can this be done using Spring Batch Admin? I see that the guide talks about uploading an XML file to add a job, but that seems tedious; if there is an API, why shouldn't we be able to create a job on the fly through the Batch Admin web UI? Or does the JDK Timer or Quartz support it?
Once a job has been created, it can't be deleted, but it can be stopped. Allowing deletion from the DB is a risky operation: Spring Batch might already have started the job execution while the DB has not been updated yet, and if you remove the job at that moment you end up with an inconsistency.
Scheduling a new job is described in Launch Job. It is not possible to create new types of jobs, as jobs can generally have a complicated configuration that is parsed only once, when the Spring context is loaded.
Dynamic deployment (on the fly) of jobs and configurations, without requiring a server restart, is a feature we implemented in the Trooper Batch Profile. It is not exactly Spring Batch Admin but builds on it. You continue to write your jobs using Spring Batch; only the container changes, since in Trooper you would use its Batch profile runtime. Screenshots and features are here: https://github.com/regunathb/Trooper/wiki/Writing-Batch-jobs-in-Trooper
I think we can deploy each Spring Batch job via SBA, i.e. each batch job is compiled as a war file and they are all deployed together on the server. That way we get the following URLs to monitor each job:
http://batchjobserver/job1
http://batchjobserver/job2
http://batchjobserver/job3
http://batchjobserver/job4
But the downside is that each war file inevitably contains its own lib files, which makes every war file around 10 MB in size.
At the same time, I tried manually adding new-job.xml to war-file\WEB-INF\classes\META-INF\spring\batch\jobs and new-job.jar to war-file\WEB-INF\lib without stopping JBoss. It works: the new job shows up in the SBA UI and is runnable.
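In shell terms, the hot-add I tried boils down to something like the following (the war name and JBoss paths are illustrative and depend on how your SBA war is exploded):

cp new-job.xml $JBOSS_HOME/standalone/deployments/spring-batch-admin.war/WEB-INF/classes/META-INF/spring/batch/jobs/
cp new-job.jar $JBOSS_HOME/standalone/deployments/spring-batch-admin.war/WEB-INF/lib/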
But obviously this would lead to a lot of maintenance and troubleshooting effort, so it is not really a workable approach.