I have a Pentaho job (.kjb) that needs to be scheduled through Control-M. Can someone help in this regard?
Try Control-M Application Integrator; you can define a custom job type using it.
As the question is a one-liner, I don't have much insight into which function you want to trigger. However, I recently worked on a Blue Prism job-type integration in Control-M using it.
Also, if your product supports an API, you can integrate using the web-service job type of Control-M, or you can call it directly from any script/program scheduled via Control-M (see the wrapper sketch after the link below).
Check the link below for the various Application Integrator job types already created by the community:
https://communities.bmc.com/groups/control-m-application-hub/content
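For the direct script/program route, here is a minimal sketch (in Python, with placeholder paths and job name) of a wrapper that a Control-M OS job could execute; kitchen.sh is Pentaho Data Integration's command-line job runner:

    import subprocess
    import sys

    # Placeholders: adjust to your PDI install and the path of your .kjb file.
    KITCHEN = "/opt/pentaho/data-integration/kitchen.sh"
    JOB_FILE = "/jobs/my_job.kjb"

    # Run the Pentaho job; its console output ends up in the Control-M sysout.
    result = subprocess.run([KITCHEN, f"-file={JOB_FILE}", "-level=Basic"])

    # Propagate kitchen's exit code so Control-M marks the job NOTOK on failure.
    sys.exit(result.returncode)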
Regards,
Mani
Good morning everyone. I hope you're all keeping safe and staying at home. So my problem is:
I have a NiFi project.
InvokeHTTP is doing the POST method, and the GenerateFlowFile processor contains the body of the POST.
All I need to know is how to make this project run one time only, i.e., the REST API needs to create one user, and I need it to stop once the user is created. It's like running this project once and only once!
Is it possible? How can we do it?
Set the Run Schedule to a very large value, e.g. 1111111110 sec.
This will make sure the processor runs only once: it fires immediately on start and is then not scheduled again for decades.
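If you would rather stop the processor explicitly after its first run instead of relying on a huge schedule interval, NiFi's REST API can do it. A minimal sketch, assuming an unsecured NiFi and a placeholder processor id:

    import requests

    NIFI = "http://localhost:8080/nifi-api"
    PROCESSOR_ID = "your-processor-uuid"  # placeholder: the InvokeHTTP processor id

    # Fetch the current revision; NiFi requires it for optimistic locking.
    proc = requests.get(f"{NIFI}/processors/{PROCESSOR_ID}").json()

    # Ask NiFi to stop the processor so it never fires again.
    requests.put(
        f"{NIFI}/processors/{PROCESSOR_ID}/run-status",
        json={"revision": proc["revision"], "state": "STOPPED"},
    )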
We have Talend jobs triggered within a Spring Boot application. Is there any way to route the output of the Talend jobs to the application log files?
One workaround we found is to write logs directly to an external file (filePath passed as a context param), but we wanted to find out whether there is a better way to configure this seamlessly.
Not sure if I understood the question correctly, but I guess your concern is what happened to the triggered jobs.
Logging
With respect to logging for Talend, you can configure it using Log4j:
https://help.talend.com/reader/5DC~TBhDsBie5JTXyVLW4g/QSGCZJKXo~uhKvZDq1DxUg
Monitoring
Regarding the status of an executed job, you can retrieve the execution details using a REST call (Talend MetaServlet API):
getTaskExecutionStatus
https://help.talend.com/reader/oYf9gKhmYrkWCiSua4qLeg/SLiAyHyDTjuznLR_F~MiQQ
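A minimal sketch of that MetaServlet call in Python, with placeholder TAC URL, credentials, and task id (the MetaServlet expects its JSON request base64-encoded in the query string):

    import base64
    import json
    import requests

    TAC = "http://tac-host:8080/org.talend.administrator"  # placeholder TAC URL

    payload = {
        "actionName": "getTaskExecutionStatus",
        "authUser": "admin@company.com",  # placeholder credentials
        "authPass": "secret",
        "taskId": 42,                     # placeholder task id
    }
    encoded = base64.b64encode(json.dumps(payload).encode()).decode()

    resp = requests.get(f"{TAC}/metaServlet?{encoded}")
    print(resp.json())  # execution status details for the task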
By modifying the existing Talend job, you could also design a kind of feedback loop, i.e., trigger a REST call back to your application with the execution details from the Talend job.
I'm deploying a project with IIB.
The good feature is Integration Service, but I don't know how to save a log before and after each operation.
Does anyone know how to resolve that?
Thanks!
There are three ways in my project. Refer to the following.
Code level:
1. JavaCompute node (using Log4j)
Flow level:
1. Trace node
2. Message flow monitoring
In addition to the other answers there is one more option, which I often use: the IAM3 SupportPac.
It adds a Log4j node and also provides the possibility to log from ESQL and Java compute nodes.
There are two ways of doing this:
1. You can use a Log node to create audit logging. This option only stores to files, and the files are not rotated.
2. You can use IBM Integration Bus monitoring events to build an external flow that intercepts the event messages and stores them any way you prefer (see the sketch below).
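As an illustration of the second option, here is a minimal Python sketch (using pymqi) that drains monitoring events from a queue. It assumes you have already created an administrative subscription that routes the flow's monitoring topic ($SYS/Broker/<node>/Monitoring/...) to a placeholder local queue:

    import pymqi

    QMGR, CHANNEL, CONN = "QM1", "DEV.APP.SVRCONN", "localhost(1414)"  # placeholders
    EVENT_QUEUE = "MONITORING.EVENTS"  # placeholder queue fed by the subscription

    qmgr = pymqi.connect(QMGR, CHANNEL, CONN)
    queue = pymqi.Queue(qmgr, EVENT_QUEUE)

    gmo = pymqi.GMO()
    gmo.Options = pymqi.CMQC.MQGMO_WAIT | pymqi.CMQC.MQGMO_FAIL_IF_QUIESCING
    gmo.WaitInterval = 5000  # milliseconds

    try:
        while True:
            msg = queue.get(None, pymqi.MD(), gmo)  # one XML monitoring event
            print(msg.decode())  # store or forward the event as you prefer
    except pymqi.MQMIError as e:
        if e.reason != pymqi.CMQC.MQRC_NO_MSG_AVAILABLE:  # 2033 = queue drained
            raise
    finally:
        queue.close()
        qmgr.disconnect()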
Please help me answer the questions below.
What is the deployment strategy for Hive-related scripts? For SQL we have DACPAC; is there any such component?
Is there any API to get the status of a job submitted through ODBC?
Have you looked at Azure Data Factory? http://azure.microsoft.com/en-us/services/data-factory/
Regarding your question on APIs to check job status, here are a few PowerShell cmdlets. Do these help?
Start-AzureHDInsightJob (https://msdn.microsoft.com/en-us/library/dn593743.aspx) starts the job and returns a job object that can be used to track or kill the job.
Wait-AzureHDInsightJob (https://msdn.microsoft.com/en-us/library/dn593748.aspx) uses the job object to check the status of the job; it waits until the job completes or the wait time is exceeded.
Stop-AzureHDInsightJob (https://msdn.microsoft.com/en-us/library/dn593754.aspx) stops the job.
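If you would rather poll over plain REST than PowerShell, HDInsight also exposes job status through the WebHCat (Templeton) gateway. A minimal sketch with placeholder cluster name, credentials, and job id (the exact response layout may vary by cluster version):

    import requests

    CLUSTER = "https://mycluster.azurehdinsight.net"  # placeholder cluster
    AUTH = ("admin", "password")                      # placeholder credentials
    JOB_ID = "job_1415651640909_0001"                 # placeholder job id

    resp = requests.get(
        f"{CLUSTER}/templeton/v1/jobs/{JOB_ID}",
        params={"user.name": "admin"},
        auth=AUTH,
    )
    print(resp.json()["status"]["state"])  # e.g. RUNNING, SUCCEEDED, KILLED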
I was wondering if it is possible to list all running jobs in the resource manager, using the DRMAA library, not just the ones started via DRMAA itself?
That is, getting data similar to what is output by the squeue command for the SLURM resource manager.
As far as I know, yes, it is, but only for DRMAAv2, which implements listing and job persistence:
https://github.com/troeger/drmaav2-mock/blob/master/drmaa2-list.c
The python-drmaa module does not implement DRMAAv2 yet, but we might start working on it soon:
https://github.com/drmaa-python
If you want to jump in, you're very welcome! ;)
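In the meantime, a pragmatic workaround on SLURM is to shell out to squeue and parse its output, since a DRMAAv1 session only sees the jobs it submitted itself. A minimal sketch (columns selected via squeue's -o format string):

    import subprocess

    # -h drops the header; %i = job id, %j = job name, %T = long state name.
    out = subprocess.run(
        ["squeue", "-h", "-o", "%i %j %T"],
        capture_output=True, text=True, check=True,
    ).stdout

    for line in out.splitlines():
        job_id, name, state = line.split(maxsplit=2)
        print(job_id, name, state)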