I've been searching the internet and the ESQL/WebSphere Message Broker documentation for a way of printing a variable's value so that I can trace it in the broker logs, like System.out.println() in Java.
I can't debug the message flow because of some technical issues, so could you please suggest how to do this, or any workarounds?
UserTrace is supposed to fulfil this role for ESQL, but if UserTrace isn't helping, I see a lot of people use static calls out from ESQL to Java, which are then logged.
The Java code could be as simple as writing to stdout (which will go to /var/mqsi/components//stdout), but more commonly I see this pattern used with existing Java logging frameworks like Log4j.
The advantage of this approach is that you unify logging between your JCNs (Java Compute Nodes) and your ESQL Compute nodes.
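As a rough sketch of that pattern (the package, class, and procedure names here are made up for illustration), the Java side is just a public static method:

// EsqlLogger.java -- hypothetical helper class
package com.example.logging;

public class EsqlLogger {
    // Writes to stdout, which the broker redirects to the component's stdout file;
    // swap in a Log4j logger here to unify logging with your JCNs.
    public static void logMessage(String message) {
        System.out.println("[ESQL] " + message);
    }
}

On the ESQL side you declare it once with the standard ESQL-to-Java binding and call it like any other procedure:

CREATE PROCEDURE logMessage(IN message CHARACTER)
  LANGUAGE JAVA
  EXTERNAL NAME "com.example.logging.EsqlLogger.logMessage";

CALL logMessage('myVar = ' || myVar);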
User Trace should meet your need. Put a Trace node at the point where you want to log, select File as the trace destination, and give it a file path.
As the pattern, give:
${CURRENT_TIMESTAMP}
${Root}
${Environment}
${LocalEnvironment}
${ExceptionList}
so it logs everything.
If it's in higher environments, then you have to use the mqsichangetrace command to enable the trace on the flow.
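For example (broker, execution group, and flow names are placeholders; check the mqsichangetrace documentation for your version's exact options):

mqsichangetrace MYBROKER -u -e MYEG -f MyMessageFlow -l normal
(run the flow, then retrieve and format the trace)
mqsireadlog MYBROKER -u -e MYEG -f -o usertrace.xml
mqsiformatlog -i usertrace.xml -o usertrace.txt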
Probably the easiest way is:
1) Set a temporary location in the Environment to the value of the variable: SET Environment.temp = yourVar;
2) Subsequently, in a Trace node, set the Pattern on the Basic tab of the Trace node to that temporary location: ${Environment.temp}
3) Configure the Trace node to print to a File, User Trace, or the Local Error Log.
4) Deploy and run your flow. Then look in the output of the Trace node.
We have an rsyslog instance configured to receive messages from multiple sources on different ports.
Messages are then assigned to different action rulesets depending on the incoming port.
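(For illustration, a minimal sketch of that kind of setup; the ports, ruleset names, and file paths here are placeholders rather than our real config:)

module(load="imtcp")
input(type="imtcp" port="10514" ruleset="rs_source_a")
input(type="imtcp" port="10515" ruleset="rs_source_b")

ruleset(name="rs_source_a") {
    action(type="omfile" file="/var/log/source_a.log")
}
ruleset(name="rs_source_b") {
    action(type="omfile" file="/var/log/source_b.log")
}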
We have noticed that sometimes (but not systematically), after an rsyslog restart, errors are logged in /var/log/messages with content like:
"2022-08-16T16:46:26.841640+02:00 mysyslogserver rsyslogd: msg: ruleset ' 6È B ' could not be found and could not be assgined to message object. This possibly leads to the message being processed incorrectly. We cannot do anything against this, but wanted to let you know. [v8.32.0 try http://www.rsyslog.com/e/3003 ]"
The ruleset name changes every time and seems to be a random binary string. The message is logged several thousand times (with the same ruleset name), at a rate which often exceeds the rate limit for internal messages.
(And of course we don't have rulesets with such names in our config file...)
Would you know what could be the cause of such an issue? Is it a bug?
Note that in some rulesets we use the "call" statement to call sub-rulesets, but we don't use "call_indirectly".
Thanks in advance for any help.
S.Hemelaer
Hello DataStage-savvy people.
Two days in a row, the same single DataStage job failed (it did not stop at all).
The job tries to create a hashfile using the command /logiciel/iis9.1/Server/DSEngine/bin/mkdbfile /[path to hashfile]/[name of hashfile] 30 1 4 20 50 80 1628
(last trace in the log)
Something to consider (or maybe not?):
The [name of hashfile] directory exists (and was last modified at the time of execution), but the file D_[name of hashfile] does not.
I am trying to understand what happened, to prevent the same incident from happening on the next run (tonight).
Before this day, the job had been in production forever, and we had never had an issue with it.
Using DataStage 9.1.0.1.
Did you check the job log to see if it captured an error? When a DataStage job executes any system command via a command execution stage or similar method, the stdout of the called command is captured and then added to a message in the job log. Thus if the mkdbfile command gives any output (success messages, errors, etc.), it should be captured and logged. The event may not be flagged as an error in the job log, depending on the return code, but the output should be there.
If there is no logged message revealing the cause of the failed create, a couple of things to check are:
-- Was the target directory on a disk that was possibly out of space at that time?
-- Do you have any antivirus software that scans directories on your system at certain times? If so, it can interfere with I/O. If it does a scan at the same time you had the problem, you may wish to update the AV software settings to exclude the directory you were writing the hash file to.
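For a quick check of the first point (paths are placeholders; assuming a Unix host):

df -h /path/to/hashfile/directory     # free space on the target filesystem
ls -ld /path/to/hashfile/directory    # ownership and permissions on the directory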
I have 17 tests in a .jmx file that I call from Jenkins. Of these, 15 run correctly and return meaningful results. However, the last 2 return with a run time of 0ms. I looked at the logs and there is no exception. My question is: where (which log file) can I look in this case?
There are different options for detecting whether your test is doing what it should:
Assertions
Result File
Log file
The easiest one is using Assertions to check the sampler response data. The most commonly used is the Response Assertion.
The next one is configuring JMeter to save the request and response fields you're interested in. See the properties starting with jmeter.save.saveservice in the jmeter.properties file; uncomment and set to true those you feel may help you get to the bottom of the issue.
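For example, in jmeter.properties (these are stock JMeter save-service properties; enable whichever you need):

jmeter.save.saveservice.output_format=xml
jmeter.save.saveservice.response_data=true
jmeter.save.saveservice.samplerData=true
jmeter.save.saveservice.requestHeaders=true
jmeter.save.saveservice.responseHeaders=true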
The next one is the most flexible and informative: I would suggest using some logging to get things sorted out. JMeter writes a log file called jmeter.log. You can add a Beanshell PreProcessor and a Beanshell PostProcessor to the sampler whose run time is 0ms, with something like:
In the PreProcessor:
log.info("Starting test " + sampler.getName() + " at " + new Date());
In the PostProcessor:
log.info("Response code " + prev.getResponseCode());
log.info("Response message " + prev.getResponseMessage());
log.info("Execution time " + prev.getTime());
etc. You can even see the full response data in the log, via the PostProcessor's predefined data variable:
log.info(new String(data));
See the How to use BeanShell guide for more details on advanced BeanShell scripting.
All stdout and stderr output from the Jenkins build will go to the console output at [Jenkins URL]/job/[job name]/[build number]/console
If you don't see anything useful there, the best approach is to run the JMeter tests outside Jenkins, but on the same machine and under the same user environment that Jenkins would use. You may need to enable some extra debugging info; see this link for some tips.
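For example (file names are placeholders), run the same plan in non-GUI mode from a shell on the Jenkins machine:

jmeter -n -t your_tests.jmx -l results.jtl -j jmeter.log

-n runs non-GUI, -t names the test plan, -l the results file, and -j the JMeter log file; comparing that jmeter.log against the Jenkins run should show whether the environment is the difference.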
I need to hook a custom execution hook into Apache Hive. Please let me know if somebody knows how to do it.
The current environment I am using is given below:
Hadoop: Cloudera version 4.1.2
Operating system: CentOS
Thanks,
Arun
There are several types of hooks depending on at which stage you want to inject your custom code:
Driver run hooks (Pre/Post)
Semantic analyzer hooks (Pre/Post)
Execution hooks (Pre/Failure/Post)
Client statistics publisher
If you run a script, the processing flow looks as follows:
1) Driver.run() takes the command
2) HiveDriverRunHook.preDriverRun() (HiveConf.ConfVars.HIVE_DRIVER_RUN_HOOKS)
3) Driver.compile() starts processing the command: it creates the abstract syntax tree
4) AbstractSemanticAnalyzerHook.preAnalyze() (HiveConf.ConfVars.SEMANTIC_ANALYZER_HOOK)
5) Semantic analysis
6) AbstractSemanticAnalyzerHook.postAnalyze() (HiveConf.ConfVars.SEMANTIC_ANALYZER_HOOK)
7) Create and validate the query plan (physical plan)
8) Driver.execute(): ready to run the jobs
9) ExecuteWithHookContext.run() (HiveConf.ConfVars.PREEXECHOOKS)
10) ExecDriver.execute() runs all the jobs
11) For each job, at every HiveConf.ConfVars.HIVECOUNTERSPULLINTERVAL interval, ClientStatsPublisher.run() is called to publish statistics (HiveConf.ConfVars.CLIENTSTATSPUBLISHERS)
12) If a task fails: ExecuteWithHookContext.run() (HiveConf.ConfVars.ONFAILUREHOOKS)
13) Finish all the tasks
14) ExecuteWithHookContext.run() (HiveConf.ConfVars.POSTEXECHOOKS)
15) Before returning the result: HiveDriverRunHook.postDriverRun() (HiveConf.ConfVars.HIVE_DRIVER_RUN_HOOKS)
16) Return the result.
For each of the hooks I indicated the interface you have to implement. In parentheses is the corresponding configuration property key you have to set in order to register the class at the beginning of the script.
E.g., setting the pre-execution hook (stage 9 of the workflow):
HiveConf.ConfVars.PREEXECHOOKS -> hive.exec.pre.hooks:
set hive.exec.pre.hooks=com.example.MyPreHook;
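A minimal sketch of such a hook (the class name matches the hypothetical one above; treat the body as an outline to adapt, not a finished implementation):

package com.example;

import org.apache.hadoop.hive.ql.hooks.ExecuteWithHookContext;
import org.apache.hadoop.hive.ql.hooks.HookContext;

public class MyPreHook implements ExecuteWithHookContext {
    @Override
    public void run(HookContext hookContext) throws Exception {
        // Invoked before the query's jobs are launched when registered
        // via hive.exec.pre.hooks; the HookContext exposes the query plan.
        System.err.println("Pre-exec hook fired for query: "
                + hookContext.getQueryPlan().getQueryString());
    }
}

Compile it, put the jar on Hive's classpath (e.g. via hive --auxpath or HIVE_AUX_JARS_PATH), and then issue the set statement above.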
Unfortunately these features aren't really documented, but you can always look into the Driver class to see the evaluation order of the hooks.
Remark: I assumed Hive 0.11.0 here; I don't think the Cloudera distribution differs (too much).
A good start: http://dharmeshkakadia.github.io/hive-hook/
There are examples there.
Note: the Hive CLI shows the messages in the console; if you execute from Hue, add a logger and you can see the results in the HiveServer2 log.
I have this in my initializer:
Delayed::Job.const_set( "MAX_ATTEMPTS", 1 )
However, my jobs are still re-running after failure, seemingly completely ignoring this setting.
What might be going on?
More info:
Here's what I'm observing: jobs with a populated "last error" field and an "attempts" number of more than 1 (10+).
I've discovered I was reading the old/wrong wiki. The correct way to set this is:
Delayed::Worker.max_attempts = 1
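For example, in an initializer (the file name is just a convention; any initializer works):

# config/initializers/delayed_job_config.rb
Delayed::Worker.max_attempts = 1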
Check your DBMS table "delayed_jobs" for records (jobs) that still exist after the job "fails"; the job will be re-run if the record is still there. If it shows that "attempts" is non-zero, then you know that your constant setting isn't working right.
Another guess is that the job's "failure", for some reason, is not being caught by DelayedJob. In that case, "attempts" would still be at 0.
Debug by examining the delayed_job/lib/delayed/job.rb file, especially the self.workoff method, when one of your jobs "fails".
Added: @John, I don't use MAX_ATTEMPTS. To debug, look in the gem to see where it is used. It sounds like the problem is that the job is being handled in the normal way, rather than limiting attempts to 1. Use the debugger or a logging statement to ensure that your MAX_ATTEMPTS setting is getting through.
Remember that the DelayedJob jobs runner is not a full Rails program, so it could be that your initializer setting is not being run. Look into the script you're using to run the jobs runner.