I've got an SSIS package (targeting SQL Server 2012) that I'm currently debugging. What I'm after is a way to log that the SSIS package has finished or has been stopped, by whatever means.
The closest log events look to be 'OnExecStatusChanged', 'OnPostExecute', and 'OnPostValidate'; however, none of these produce any log messages when I break execution in Visual Studio.
I suspect the answer may be "you can't", but I want to see if there are perhaps more exotic solutions before I give up.
There are two options I can think of.
One has been highlighted above: using the pre- and post-execute events. If you were to use this approach, I would recommend creating a table (Dim_Package_Log?) and inserting into it with a stored procedure called on pre- and post-execute (a sketch follows below).
Clarification: this won't catch package breaks, just start, end, and errors.
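To make that concrete, here is a minimal sketch of the logging table and procedure (Dim_Package_Log is the name suggested above; every other name and column is just illustrative):

CREATE TABLE dbo.Dim_Package_Log
(
    LogId       INT IDENTITY(1,1) PRIMARY KEY,
    PackageName NVARCHAR(260) NOT NULL,
    EventName   NVARCHAR(50)  NOT NULL,   -- e.g. 'OnPreExecute', 'OnPostExecute', 'OnError'
    LoggedAt    DATETIME2(3)  NOT NULL DEFAULT SYSDATETIME()
);
GO

CREATE PROCEDURE dbo.usp_LogPackageEvent
    @PackageName NVARCHAR(260),
    @EventName   NVARCHAR(50)
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO dbo.Dim_Package_Log (PackageName, EventName)
    VALUES (@PackageName, @EventName);
END
GO

You would call the procedure from an Execute SQL Task placed in the package's OnPreExecute and OnPostExecute (and OnError) event handlers, passing System::PackageName as the parameter.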
As you rightly identified, though, this will not record package breaks. To capture those I have implemented a view that utilises two tables:
SSISDB.catalog.event_messages
SSISDB.catalog.executions
If you do some "exotic" joins, you can utilise the execution status from executions and the messages from event_messages to find the information you want.
I can't remember which MSDN page I found it on, but this is what the execution status in catalog.executions means:
The possible values are created (1), running (2), canceled (3), failed (4), pending (5), ended unexpectedly (6), succeeded (7), stopping (8), and completed (9).
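A minimal sketch of such a view (the join on operation_id = execution_id and the column names follow the standard SSISDB catalog views; trim or extend it to suit):

CREATE VIEW dbo.vw_PackageExecutionLog
AS
SELECT
    e.execution_id,
    e.folder_name,
    e.project_name,
    e.package_name,
    e.status,        -- 3 = canceled, 4 = failed, 6 = ended unexpectedly, 7 = succeeded (see above)
    e.start_time,
    e.end_time,
    m.event_name,
    m.message_time,
    m.message
FROM SSISDB.catalog.executions AS e
LEFT JOIN SSISDB.catalog.event_messages AS m
    ON m.operation_id = e.execution_id;
GO

Filtering on e.status IN (3, 4, 6) then picks out executions that were canceled, failed, or ended unexpectedly.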
Clarification:
Below is a sample line of what SSISDB.catalog.executions outputs for each package execution from a Job:
43198 FolderName ProjectName PackageName.dtsx NULL NULL NULL NULL 10405 GUID SERVICEACCOUNTNAME 0 200 2015-02-16 00:00:03.4156856 +11:00 20 18 7 2015-02-16 00:00:05.4409834 +11:00 2015-02-16 00:00:58.4567400 +11:00 GUID SERVICEACCOUNTNAME 10324 NULL NULL ID SERVER SERVER 16776756 3791028 20971060 8131948 2
In this example the status column has a value of 7. As detailed above, this status changes based on the end state of the package execution; in this case, succeeded. If the package breaks midway, that will be captured in this status.
Further information regarding this SSISDB capability is located on this MSDN page.
I know this is only a partial answer. What it covers is detecting that a package has finished with an error or with success, which you can do by calling the package from a parent package.
But if the package is forced to stop, this won't have any effect.
Edit 2
It was a Microsoft bug. My CRM was updated recently and the query is now executing as expected.
Server version: 9.1.0000.21041
Client version: 1.4.1144-2007.3
Edit
If it is a Microsoft bug, which looks likely thanks to Arun's research, then for future reference, my CRM versions are
Server version: 9.1.0000.20151
Client version: 1.4.1077-2007.1
Original question below
I followed the example as described in the MSDN Documentation here.
Specify a positive number that indicates the number of entity records to be returned per page. If you do not specify this parameter, the value is defaulted to the maximum limit of 5000 records.
If the number of records being retrieved is more than the specified maxPageSize value or 5000 records, nextLink attribute in the returned promise object will contain a link to retrieve the next set of entities.
However, it doesn't appear to be working for me. Here's my sample JavaScript code:
// maxPageSize is the third parameter (20 here)
Xrm.WebApi.retrieveMultipleRecords('account', '?$select=name', 20).then(
    result => console.log(result.entities.length),  // logs the number of records returned
    error => console.error(error.message)
);
You can see that my query doesn't include any complex filter or expand expressions, and maxPageSize is 20.
When I run this code, it returns the full set of results, not limiting the page size at all.
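For reference, based on the documentation quoted above, I would expect something like this (just a sketch; it assumes more than 20 account records exist):

// Expected per the docs: at most 20 entities per page, with result.nextLink
// pointing at the next page when more records exist.
Xrm.WebApi.retrieveMultipleRecords('account', '?$select=name', 20).then(
    result => {
        console.log('Page size:', result.entities.length);  // expected: <= 20
        console.log('Next page:', result.nextLink);          // expected: a URL when more accounts exist
    },
    error => console.error(error.message)
);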
I noticed this too, but it happens only in the UCI; the issue isn't reproduced when you run the same code in the classic web UI.
This is probably a bug on Microsoft's side; please create a ticket so they can fix it.
(Screenshots comparing the results in the UCI and the classic web UI omitted.)
Hello, DataStage-savvy people.
Two days in a row, the same single DataStage job failed (it did not stop at all).
The job tries to create a hashfile using the command /logiciel/iis9.1/Server/DSEngine/bin/mkdbfile /[path to hashfile]/[name of hashfile] 30 1 4 20 50 80 1628
(last trace in the log)
Something to consider (or maybe not?):
The [name of hashfile] directory exists (and was last modified at the time of execution), but the file D_[name of hashfile] does not.
I am trying to understand what happened so I can prevent the same incident from happening on the next run (tonight).
Before this, the job had been in production for a long time and we had no issues with it.
Using DataStage 9.1.0.1.
Did you check the job log to see if it captured an error? When a DataStage job executes any system command via a command execution stage or similar method, the stdout of the called command is captured and added to a message in the job log. Thus, if the mkdbfile command gives any output (success messages, errors, etc.), it should be captured and logged. The event may not be flagged as an error in the job log, depending on the return code, but the output should be there.
If there is no logged message revealing the cause of the failed create, a couple of things to check are:
-- Was the target directory on a disk that was possibly out of space at that time?
-- Do you have any antivirus software that scans directories on your system at certain times? If so, it can interfere with I/O. If it ran a scan at the same time you had the problem, you may wish to update the AV software settings to exclude the directory you were writing the dbfile to.
I'm currently coding an app that uses Parse as a backend, but I have run into a '124' error. I admit that I do a lot in my cloud functions but, from what I've observed, it doesn't appear to run for over 15 seconds. Could someone please confirm this? Below is the output.
E2015-03-06T03:49:52.644Z] v286: Ran cloud function createEvent for user puZNjFVfSm with:
Input: {"RSVPDate":{"__type":"Date","iso":"2015-03-06T04:49:52.000Z"},"description":"Sample event to showcase functionality","group":{"max":5,"min":4},"max":50,"reoccur":{"day":1,"month":1,"stop":{"__type":"Date","iso":"2015-03-06T04:49:52.000Z"},"week":1},"title":"SampleFCFS"}
Failed with: Execution timed out
I2015-03-06T03:49:52.716Z] begin
I2015-03-06T03:49:52.717Z] creating Event - initial checks completed
I2015-03-06T03:49:52.718Z] Finished advanced checks
I2015-03-06T03:49:52.719Z] Event creation start
I2015-03-06T03:49:52.770Z] begin event creation
I2015-03-06T03:49:52.873Z] Finding role: company_employee_z0Zx39OyuY
I2015-03-06T03:49:52.875Z] Added and secured event
I2015-03-06T03:49:52.931Z] attaching role to 425Qy9v9e4
I2015-03-06T03:49:52.934Z] Adding participant
From what I can tell, it looks like I'm only getting 300Z (is that milliseconds?) on all my runs. Shouldn't I be getting 15 seconds?
Update: I found that the issue was caused by using the addUnique function of Parse Objects with an array of pointers. By inserting ids instead of pointers, the issue was resolved.
Thank you for your help.
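For anyone hitting the same thing, the change looked roughly like this (a hedged sketch: "Event", "participants", and the surrounding variables are made up; only the addUnique id-vs-pointer switch comes from the update above):

var event = new Parse.Object("Event");
var participants = [];  // assume an array of Parse.Object participants fetched elsewhere

// Before (caused the timeout): adding full pointer objects to the array field.
// participants.forEach(function (p) { event.addUnique("participants", p); });

// After: add the objectIds instead of the pointers.
participants.forEach(function (p) { event.addUnique("participants", p.id); });

event.save().then(function (saved) {
    console.log("Event saved: " + saved.id);
}, function (error) {
    console.error(error);
});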
Hi. Can someone tell me how to drive the analysis for my issue? opmn.log is not getting updated. All the instances are working fine, and the individual instance logs are getting updated, but $ORACLE_HOME/opmn/logs/opmn.log has been 0 KB for about a week. I could not find any statements regarding opmn.log in opmn.xml.
There is a difference in the logging mechanism between (9.0.2 to 10.1.2) and (10.1.3 and later).
(9.0.2 to 10.1.2): log-file path="x" level="x" rotation-size="x"
(10.1.3 and later): log path="x" comp="x;y;z" rotation-size="x"
(9.0.2 to 10.1.2):
ORACLE_HOME/opmn/logs/ipm.log:
Review the error codes and messages that are shown in the ipm.log file. The PM portion of OPMN generates and outputs the error messages in this file. The ipm.log file tracks command execution and operation progress. The level of detail that gets logged in the ipm.log can be modified by configuration in the opmn.xml file. Refer to Chapter 3, "opmn.xml Common Configuration" for examples of debug levels.
ORACLE_HOME/opmn/logs/ons.log:
Use the ons.log file to debug the ONS portion of OPMN or for early OPMN errors. The ONS portion of OPMN is initialized before PM, and so errors that occur early in OPMN initialization will show up in the ons.log file.
ORACLE_HOME/opmn/logs/opmn.log:
The opmn.log file contains output generated by OPMN when the ipm.log and ons.log files are not available. Typically, the only output written to the opmn.log file will be the exit status of a child OPMN process. A status code of 4 indicates a normal reload of OPMN. All other status codes indicate an abnormal termination of the child OPMN process.
ipm.log (10.1.2) is similar to opmn.log (10.1.3).
ons.log (10.1.2) is similar to opmn.dbg (10.1.3).
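For 10.1.3 and later, the log element in opmn.xml looks roughly like this (a sketch only; the path, component list, and rotation size shown are example values):

<!-- 10.1.3+ style log element in opmn.xml; values shown are examples -->
<log path="$ORACLE_HOME/opmn/logs/opmn.log" comp="internal;ons;pm" rotation-size="1500000"/>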
I have this in my initializer:
Delayed::Job.const_set( "MAX_ATTEMPTS", 1 )
However, my jobs are still re-running after failure, seemingly completely ignoring this setting.
What might be going on?
more info
Here's what I'm observing: jobs with a populated "last error" field and an "attempts" number of more than 1 (10+).
I've discovered I was reading the old/wrong wiki. The correct way to set this is:
Delayed::Worker.max_attempts = 1
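For reference, a minimal initializer would look something like this (the file name is just a convention; only the max_attempts line comes from above):

# config/initializers/delayed_job_config.rb  (file name is just a convention)

# The MAX_ATTEMPTS constant comes from the old wiki and is not read by the worker,
# so setting it via const_set has no effect:
# Delayed::Job.const_set("MAX_ATTEMPTS", 1)

# This is the setting the worker actually reads:
Delayed::Worker.max_attempts = 1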
Check your DBMS table "delayed_jobs" for records (jobs) that still exist after the job "fails"; the job will be re-run if the record is still there. -- If the "attempts" value is non-zero, then you know that your constant setting isn't working right.
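For example (column names per the standard delayed_job schema; adjust if yours differs):

-- Shows how many times each job has been attempted and its last error
SELECT id, attempts, last_error, run_at, failed_at
FROM delayed_jobs
ORDER BY run_at;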
Another guess is that the job's "failure," for some reason, is not being caught by DelayedJob. -- In that case, the "attempts" would still be at 0.
Debug by examining the delayed_job/lib/delayed/job.rb file, especially the self.workoff method, when one of your jobs "fails".
Added: @John, I don't use MAX_ATTEMPTS. To debug, look in the gem to see where it is used. It sounds like the problem is that the job is being handled in the normal way rather than limiting attempts to 1. Use the debugger or a logging statement to ensure that your MAX_ATTEMPTS setting is getting through.
Remember that the DelayedJob jobs runner is not a full Rails program, so it could be that your initializer setting is not being run. Look into the script you're using to run the jobs runner.