How to save start log and end log when using Integration Service IIB? - ibm-integration-bus

I'm deploying a project with IIB.
Integration Service is a nice feature, but I don't know how to save a log before and after each operation.
Does anyone know how to resolve this?
Thanks!

There are three approaches I use in my project. Refer to the following.
Code Level
1. JavaComputeNode (using log4j); a minimal sketch follows this list.
Flow Level
1. TraceNode
2. Message Flow Monitoring
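As a minimal sketch of the JavaCompute approach, assuming log4j 1.x is available on the integration server's classpath; the logger name and messages are placeholders, not anything the original answer specifies:

import org.apache.log4j.Logger;
import com.ibm.broker.javacompute.MbJavaComputeNode;
import com.ibm.broker.plugin.MbException;
import com.ibm.broker.plugin.MbMessageAssembly;

public class AuditLogNode extends MbJavaComputeNode {
    // "ServiceAudit" is a placeholder logger name configured in log4j.properties
    private static final Logger LOG = Logger.getLogger("ServiceAudit");

    @Override
    public void evaluate(MbMessageAssembly assembly) throws MbException {
        LOG.info("operation start");                      // start log
        try {
            getOutputTerminal("out").propagate(assembly); // run the downstream flow
            LOG.info("operation end");                    // end log
        } catch (MbException e) {
            LOG.error("operation failed", e);             // failure log
            throw e;
        }
    }
}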

In addition to the other answers there is one more option, which I often use: the IAM3 SupportPac.
It adds a log4j node and also provides the ability to log from ESQL and Java compute nodes.

There are two ways of doing this:
You can use the Log Node to create audit logging. This option only stores to files, and the files do not rotate.
You can use IBM Integration Bus monitoring events to create an external flow that intercepts the messages and stores them any way you prefer; see the command sketch below.
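As a sketch of the monitoring approach: first activate monitoring events on the flow with mqsichangeflowmonitoring, then have your interception flow subscribe to the published events. The node, integration server, and flow names below are placeholders:

mqsichangeflowmonitoring MYNODE -e default -f MyMessageFlow -c active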

Related

Disabling specific context path in azure app insights

I have a Spring Boot application that I have used alongside Azure App Insights, and it works as expected.
Had a question on this: Is it possible to exclude specific context paths? I have just dropped in the starter and configured the instrumentation key.
Are there any specific settings in application.properties that can fix this? Or do I need to create my own instance of TelemetryClient?
I am assuming that you don't want to store some events in the AI instance. If so, you can use the sampling feature to suppress some of the events.
You can set sampling in the Java SDK in the ApplicationInsights.xml file.
You can include or exclude specific types of telemetry from sampling using the following tags inside the "FixedRateSamplingTelemetryProcessor" Processor tag:
<Processor type="FixedRateSamplingTelemetryProcessor">
  <ExcludedTypes>
    <ExcludedType>Request</ExcludedType>
  </ExcludedTypes>
  <IncludedTypes>
    <IncludedType>Exception</IncludedType>
  </IncludedTypes>
</Processor>
You can read more about sampling here.
Hope it helps.

Logging for Talend job running within spring-boot

We have Talend jobs triggered within a Spring Boot application. Is there any way to route the output of the Talend jobs to the application log files?
One workaround we found is to write logs directly to an external file (filePath passed as a context param), but we wanted to know if there is a better way to configure this seamlessly.
Not sure if I understood the question correctly, but I guess your concern might be about what happened to the triggered jobs.
Logging
With respect to logging for Talend, you could configure it using Log4j:
https://help.talend.com/reader/5DC~TBhDsBie5JTXyVLW4g/QSGCZJKXo~uhKvZDq1DxUg
Monitoring
Regarding the status of the executed job, you could retrieve the execution details using a REST call (Talend MetaServlet API):
getTaskExecutionStatus
https://help.talend.com/reader/oYf9gKhmYrkWCiSua4qLeg/SLiAyHyDTjuznLR_F~MiQQ
By modifying the existing Talend job, you could also design a feedback loop, i.e. trigger a REST call back to your application with the details of the execution from the Talend job.
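For the context-parameter workaround mentioned in the question, a minimal sketch of invoking a Talend-generated job class from the Spring Boot side; the job class name, parameter name, and file path are hypothetical, while runJobInTOS is the entry point Talend generates:

// Pass the log file location to the job as a context parameter
String[] args = { "--context_param logFilePath=/var/log/app/talend-job.log" };
MyTalendJob job = new MyTalendJob();    // class generated by the Talend build
int exitCode = job.runJobInTOS(args);   // 0 means the job ended normally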

Daily rolling log file in Websphere Liberty (16.0.0.4-WS-LIBERTY-CORE)

How do I create a daily rolling log file in WebSphere Liberty? I want the name of the log file to have a YYYYMMDD format.
Currently I'm only able to limit the max file size and max number of files, set a static name of messages.log, and disable the console log.
<logging consoleLogLevel="OFF" maxFileSize="1" maxFiles="3" messageFileName="loggingMessages.log"/>
https://www.ibm.com/support/knowledgecenter/SSEQTP_8.5.5/com.ibm.websphere.wlp.doc/ae/rwlp_logging.html
WebSphere Liberty does not currently have the ability to schedule log file rotation like traditional WAS. You can request this feature using the RFE site.
Alternatively, you could use an approach like Bruce mentioned, perhaps using a cron job to restart the server at midnight; a sketch follows.
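A sketch of that workaround, assuming a Linux host, a server named defaultServer, and Liberty installed under /opt/ibm/wlp (all placeholders):

# crontab entry: restart the server at midnight so messages.log rolls over
0 0 * * * /opt/ibm/wlp/bin/server stop defaultServer && /opt/ibm/wlp/bin/server start defaultServer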
You might also consider configuring Liberty's binary logging. This creates a binary log file that can be queried to produce plain-text log files (with filtering options, etc.), and it does have some time-based options. More info here.
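As a sketch of the binary logging approach (the server name and dates below are placeholders):

# bootstrap.properties: switch the server to binary (HPEL) logging
websphere.log.provider=binaryLogging-1.0

# render the binary repository as text, filtered to a single day
binaryLog view defaultServer --minDate=2017-02-01 --maxDate=2017-02-01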
Hope this helps, Andy
Probably not the answer you want, but if you restart the server it will roll the log.

Logging for two different environment logs in to a single log file

I am quite new to the log4j2 logger, and my requirement is to write logs from both an application server and a web server.
I have two different environments on which a JBoss server is deployed.
I have a log file on the web server environment that records errors, and I want the application server to write its logs to the same file.
Please suggest.
If you want the logs to be integrated together, you should use a solution like Splunk or Elasticsearch/Logstash/Kibana (ELK).
When you try to write to a file from two different processes, your file will get corrupted unless you use file locking. However, your throughput will decrease significantly, and file locking isn't supported for rolling files. So the best approach is to send the logs to a single process where they can be aggregated.
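A minimal log4j2.xml sketch of that aggregation approach, assuming some log collector is listening on loghost:4560 (host and port are placeholders); both JVMs would ship events to it instead of sharing the file:

<Configuration>
  <Appenders>
    <Socket name="Central" host="loghost" port="4560">
      <JsonLayout compact="true" eventEol="true"/>
    </Socket>
  </Appenders>
  <Loggers>
    <Root level="error">
      <AppenderRef ref="Central"/>
    </Root>
  </Loggers>
</Configuration>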

Diagnostic Monitor Trace Listener

I would like to know if it is possible to modify the way Trace records trace information.
Trace.Listeners.Add(new DiagnosticMonitorTraceListener());
Trace.TraceInformation("OnStart");
I would like to be able to use the current WADLogsTable while adding one or more custom columns to the table.
Right now the default table created by DiagnosticMonitorConfiguration looks like this:
PartitionKey|RowKey|Timestamp|EventTickCount|DeploymentID|Role|RoleInstance|Level|EventID|Pid|TiD|Message|
I would like to add some custom columns at the end, like:
PartitionKey|RowKey|Timestamp|EventTickCount|DeploymentID|Role|RoleInstance|Level|EventID|Pid|TiD|Message|Custom1|Custom2
So every time I trace something I am able to add data for those two custom columns.
Thanks
I don't think you'll be able to do this. While Windows Azure Diagnostics is quite extensible, you will not be able to modify the schema for trace logging. I would recommend looking into implementing custom diagnostics. You may find this link useful: http://convective.wordpress.com/2009/12/08/custom-diagnostics-in-windows-azure/.
As Gaurav mentioned, this is not doable with the default implementation of Trace.
I'd recommend using something like log4net and implementing a custom Table Storage appender. I've done this on a number of projects and it works wonderfully. Log4net can also consume regular Trace messages and log them into its storage.
