How to get all session details from pmcmd in Informatica when the workflow is a concurrent workflow - PuTTY

I have configured the workflow as a concurrent workflow with the same instance name and tried to retrieve the details using getsessionstatistics/gettaskdetails. Since it is a concurrent workflow, I am not able to get the details of all sessions using pmcmd in a shell script. I also saw a command in the Informatica documentation called "getTaskDetailsEx", but if I run it in PuTTY it shows ERROR: Unknown command [gettaskdetailex]. I have tried it in all lower case as well.
Can someone please suggest how to get the details using "getTaskDetailsEx", or any other way to get all session details of a concurrent workflow?
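For reference, a rough sketch of the pmcmd calls involved, assuming your PowerCenter release supports the -runinsname (-rin) and -wfrunid options on getsessionstatistics/gettaskdetails for concurrent runs (the service, folder, workflow and session names and the run id below are placeholders; check the pmcmd reference for your version):
# Statistics for one session of a specific concurrent run instance
pmcmd getsessionstatistics -sv IntSvc -d Domain -u user -p pass -folder MyFolder -rin Instance1 wf_MyWorkflow.s_MySession
# Or address the run by its workflow run id
pmcmd gettaskdetails -sv IntSvc -d Domain -u user -p pass -folder MyFolder -wfrunid 12345 wf_MyWorkflow.s_MySession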

Related

Using SSHMon plugin with JMeter - plugin not capturing any stats

I have been working with JMeter for quite some time now and I have been trying to use the JMeter SSHMon plugin, but I am stuck: even after configuring it completely it simply says "Waiting for samples" and does not render anything on the graph.
I am trying to execute the command on a Linux box and have passed all the relevant parameters for collecting the stats, but I am still not able to capture anything. Any help or pointers will be appreciated.
I also tried connecting to the Linux box using PuTTY and executing the command, and the command does work, but when I execute the test the plugin does not capture anything.
Please find the screenshot attached.
In the majority of cases the answer lives in the jmeter.log file; check it for any suspicious entries, as the cause will most probably be identified there. Also make sure to actually run your test: SSHMon is a Listener and relies on Sampler Results, so if your test is not running it will not show anything.
As an alternative you can use the JMeter PerfMon Plugin, which has an EXEC metric, so you can collect the same numbers; however, PerfMon requires its Server Agent to be up and running on the remote Linux system.
After a lot of trial and error I was able to get SSHMon working. Please find the solution below.
OK guys, so it's trickier than you would expect. I thought that installing the PerfMon Agent on the server was what made JMeter collect the stats for the SSHMon listener, but there is a catch to it. To start off, I will let you know that installing the PerfMon Agent on the servers and then using that plugin to collect the stats works smoothly; you can definitely use this option. But it requires the agent to be started every time you want to run a test, and if there are multiple servers you will have to restart it on those servers too. I am not sure if there is a way to automate the restart of the agent or to keep it running for longer (one rough way is sketched right after this paragraph). If you are lazy like me, have installation restrictions on the servers, or are hell-bent on using SSHMon, then what you need to do is stated below.
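One rough way to keep the PerfMon Server Agent running between tests, assuming it is unpacked in ~/serveragent and started with its bundled startAgent.sh script, is to launch it detached from your SSH session:
# nohup plus & keeps the agent alive after you log out; output goes to agent.log
cd ~/serveragent
nohup ./startAgent.sh > agent.log 2>&1 &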
You should always start JMeter with the following command-line arguments:
jmeter -H "Proxy" -P "Port" -u "UserName" -a "Password"
The arguments are self-explanatory. Once you do that JMeter will be launched, but wait, it's not done yet!!
When you start executing your test, the command prompt in which you started JMeter will prompt for Kerberos Username [YourUsername]: you have to enter your username here again, the one you use to start JMeter or log in to your system. Following this it will prompt you for the Kerberos Password for your username: enter your password and voila!!
The thing is, this happens in the background, so you never see what is happening on the command prompt you used to start JMeter.
Please see below for more clarity.
Kerberos Username [UserName]: UserName
Kerberos Password for UserName: Password
I have attached the screenshot in the question as well as here, showing the issue being resolved. Please refer to the "Solution ScreenShot". Cheers!!
Hope this helps, guys!! :)
Also please upvote the answer if it helps you!! :)

How to schedule a shell script using Google Cloud Shell?

I have a .sh file that is stored in GCS. I am trying to schedule the .sh file through Google Cloud Shell.
I can run the same file using the command gsutil cat gs://miptestauto/baby.sh | sh, but I am not able to schedule it.
Following is my crontab entry for scheduling the file:
16 17 * * * gsutil cat gs://miptestauto/baby.sh | sh
It displays the message "auto saving...done", but the scheduled job does not get displayed when I use crontab -l.
# contents of .sh file
#!/bin/bash
bq load --source_format=CSV babynames.baby_destination13 gs://testauto/yob2010.txt name:string,gender:string,count:integer
Can anyone please tell me how to schedule it using Google Cloud Shell?
I am not using Compute Engine/App Engine; I just want to schedule it using Cloud Shell.
Thank you in advance :)
As per the documentation, Cloud Shell is intended for interactive use only. The Cloud Shell instances are provisioned on a per-user, per-session basis and sessions are terminated after an hour of inactivity.
In order to schedule a daily cron job, the instance needs to be up and running all the time, but this doesn't happen with Cloud Shell, and I believe your jobs are not running because of this.
When you start Cloud Shell, it provisions an f1-micro instance, which is the same machine type you can get for free if you are eligible for "Always Free". Therefore you can create an f1-micro instance, configure the cron job on it and leave it running so it can execute the daily job.
You can check the free usage limits at https://cloud.google.com/compute/pricing#freeusage
You can also use the Cloud Scheduler product (https://cloud.google.com/scheduler), which is a serverless, managed cron-like scheduler.
To schedule a script you first have to create a project if you don't have one. I assume you already have a project, so if that's the case just create the instance that you want for scheduling this script (the console steps follow, with a gcloud sketch after them).
To create the new instance:
At the Google Cloud Platform Console click on Products & Services which is the icon with the four bars at the top left hand corner.
On the menu go to the Compute section and hover on Compute Engine and then click on VM Instances.
Go to the menu bar above the instance section and there you will see a Create Instance button. Click it and fill in the configuration values that you want your new instance to have. The values that you select will determine your VM instance features. You can choose, among other values, the name, zone and machine type for your new instance.
In the Machine type section click the drop-down menu tab to select an “f1-micro instance”.
In the Identity and API access section, give access scope to the Storage API so that you can read and write to your bucket in case you need to; the default access scope only allows you to read. Also enable the BigQuery API.
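If you prefer the command line over the console, a rough gcloud equivalent of the steps above (the instance name and zone are illustrative, and the scope aliases assume a reasonably recent gcloud) would be:
# Creates an f1-micro VM with read/write access to Cloud Storage and access to BigQuery
gcloud compute instances create cron-vm --machine-type=f1-micro --zone=us-central1-a --scopes=storage-rw,bigquery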
Once you have the instance created and access to the bucket, just create your cron job inside your new instance: in the user account under which the cron job will execute, run crontab -e and edit the file to add the cron job that will execute your baby.sh script. The following documentation link should help you with this.
Please note that if you want to view output from your script, you may need to redirect it to your current terminal or to a file.
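For example, an illustrative crontab entry on the VM that appends the script output to a log file (the log path is just an example):
# Runs daily at 17:16 and appends stdout/stderr to a log file
16 17 * * * gsutil cat gs://miptestauto/baby.sh | sh >> /tmp/baby_cron.log 2>&1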

How to do deployment of Hive scripts in multiple environments

Please help me in answering the questions below.
What is the deployment strategy for Hive-related scripts? For SQL we have DACPAC; is there any such component for Hive?
Is there any API to get the status of a job submitted through ODBC?
Have you looked at Azure Data Factory? http://azure.microsoft.com/en-us/services/data-factory/
Regarding your question on APIs to check job status, here are a few PowerShell cmdlets. Do these help you?
“Start-AzureHDInsightJob” (https://msdn.microsoft.com/en-us/library/dn593743.aspx) starts the job and returns a job object which can be used to track/kill the job.
“Wait-AzureHDInsightJob” (https://msdn.microsoft.com/en-us/library/dn593748.aspx) uses the job object to check the status of the job. It will wait until the job completes or the wait time is exceeded.
“Stop-AzureHDInsightJob” (https://msdn.microsoft.com/en-us/library/dn593754.aspx) stops the job.

Open a JDBC connection in a specific AS400 subsystem

I have a web service that calls some stored procedures on an AS400 via JTOpen.
What I would like is for the connections used to call the stored procedures to be opened in a specific subsystem with a specific user, instead of QUSRWRK/QUSER as they are now (the default).
I think I should be able to clone the QUSRWRK subsystem to make it start with a specific user, but what I cannot figure out is the mechanism to open the connection in the specific subsystem.
I guess there should be a property at connection level to say subsystem=MySubsystem.
But unfortunately I haven't found that property.
Any hint would be appreciated.
Flavio
Let the system take care of the subsystem the database server job is started in.
You should just focus on the application (which is what IBM i excels at).
If need be, you can tweak subsystem parameters for QUSRWRK to improve performance by allocating memory, etc.
The system uses a pool of prestarted jobs as described in the FAQ: When I do WRKACTJOB, why is the host server job running under QUSER instead of the profile specified on the AS400 object?
To improve performance, the host server jobs are prestarted jobs running under QUSER. When the Toolbox connects to a host server job in order to perform an API call, run a command, etc, a request is sent from the Toolbox to an available prestarted job. This request includes the user profile specified on the AS400 object that represents the connection. The host server job receives the request and swaps to the specified user profile before it runs the request. The host server itself originally runs under the QUSER profile, so output from the WRKACTJOB command will show the job as being owned by QUSER. However, the job is in fact running under the profile specified on the request. To determine what profile is being used for any given host server job, you can do one of three things:
1. Display the job log for that job and find the message indicating which user profile is used as a result of the swap.
2. Work with the job and display job status attributes to view the current user profile.
3. Use Navigator for i to view all of the server jobs, which will list the current user of each job. You can also use Navigator for i to look at the server jobs being used by a particular user.

How to run only failed sessions in a workflow

In a workflow there are sessions connected in parallel and in sequence. Suppose some of the sessions running in parallel and in sequence fail; how do I restart the workflow with only the failed sessions? How can I design this in Informatica?
Turn on 'suspend on error' for the workflow.
Turn on 'restart on recovery' for each session in the workflow.
Now if any session fails, the workflow will be suspended until you fix the problem and hit Recover on the workflow in the Monitor. When you do so, only the failed sessions are restarted.
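If you would rather script the recovery than click Recover in the Monitor, pmcmd also has a recoverworkflow command; a rough sketch with placeholder service, folder and workflow names:
# Recovers the suspended workflow, restarting only the failed sessions
pmcmd recoverworkflow -sv IntSvc -d Domain -u user -p pass -f MyFolder -wait wf_MyWorkflow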
A large publishing client asked us to implement something similar to what you ask. We created a database table to keep track of successful sessions within a workflow. Each session has a mapping at the end that adds an entry to the database saying whether it passed or failed. When we run in recovery mode, we query the table at the beginning of each session to find out whether we need to run that session or not.
We also provided a web interface to this table where business users can manually choose which sessions to run or skip based on their needs.
The recovery option will work only if you have "workflow recovery" turned on in the repository. If you don't, you can check the option "fail workflow if task fails" at the individual session level and create conditions on the links that connect the sessions to each other (an example condition is shown below). The disadvantage of this method is that your workflow will appear failed and won't execute the next sessions until the failed ones are fixed.
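An illustrative link condition of that kind (the session name is a placeholder) would be:
$s_MySession.Status = SUCCEEDED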
thanks.
