Redirect task log of Spring Cloud Data Flow - spring-boot

Is there any way to locate, copy, or manipulate the logs of task executions in SCDF running locally?
I currently see the logs whenever I execute a (batch or non-batch) task from the command line of the shell where I started the Data Flow server locally. On both CentOS 7 and Windows 10, it says that the stdout/stderr logs are located inside
/tmp (temp in windows)/[SOME_NUMBER_I_DON'T_KNOW]/${task_name_i_defined}_${SOME_HEX_CODE_RELATED_TO_TASK_EXECUTION_ID}
I want to use that information whenever I need it.
Passing properties to the dataflow jar doesn't work. It just creates one file and overwrites it on every task execution, instead of storing each task execution's log in a different folder.
Modifying properties like logging.file.path in the task launching configuration doesn't work, either. Only the stdout of the task is written, under the name 'spring.log', at the specific location I designated. The behavior is the same as in the case above.
Spring Cloud Data Flow Task logs
I looked at this answer, but it does not work, either...
I know there are a lot of parameters that I could pass to the dataflow server or to tasks, but I don't think any of them satisfies this condition. Please enlighten me.

The only configuration property available to affect the log location is the working-directories-root deployer property.
Because it is a deployer property, it can not simply be set as spring.cloud.deployer.local.working-directories-root.
It can be set at task launch time, prefixed with deployer.*.local (details).
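For example, from the Data Flow shell the property can be set per launch (a minimal sketch; the task name mytask and the path are illustrative):
task launch mytask --properties "deployer.*.local.working-directories-root=/Users/foo/logz"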
It can also be configured globally via the "task platform" properties (details).
When configured at the platform level, it can be done in yml:
spring:
  cloud:
    dataflow:
      task:
        platform:
          local:
            accounts:
              default:
                working-directories-root: /Users/foo/logz
or via an env var:
SPRING_CLOUD_DATAFLOW_TASK_PLATFORM_LOCAL_ACCOUNTS_DEFAULT_WORKING_DIRECTORIES_ROOT=/Users/foo/logz
Details
The STDOUT log location is created at <task-work-dir>/stdout.log (details).
The <task-work-dir> is defined as:
<working-directories-root> / System.nanoTime() / <taskLaunchId>
(details)
The <working-directories-root> is the value of the
working-directories-root local deployer property or the "java.io.tmpdir" system property when the local deployer property is not specified.
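Putting that together: with the working-directories-root configured above, each launch writes its stdout to a path of this shape (the nanoTime directory and launch id below are purely illustrative):
/Users/foo/logz/1634120000000000/mytask-3f2a9c1b/stdout.log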

Related

JOOQ LoggerListener extensive DEBUG logging

For performance reasons, I need to get rid of org.jooq.tools.LoggerListener DEBUG log messages in a Spring Boot application running inside Docker. None of the Spring Boot options, like the (Docker) env variable LOGGING_LEVEL_ORG_JOOQ=INFO in docker-compose.yml or the Java system property -Dlogging.level.org.jooq=INFO passed to the docker container in entry.sh, removes these DEBUG messages reporting query execution details. Both options have been checked at the Docker container level.
Even a custom logback-perf.xml conf file, as in https://github.com/jOOQ/jOOQ/blob/master/jOOQ-examples/jOOQ-spring-boot-example/src/main/resources/logback.xml with DEBUG changed to INFO, pointed to by the LOGGING_CONFIG env var from docker-compose.yml, does not prevent these debug messages. I have verified that the custom logback-perf.xml conf file is in use by changing the appender patterns.
The best way to remove those messages in jOOQ directly is to specify Settings.executeLogging = false; see here.
Obviously, there are also ways to set up loggers correctly, but I cannot see what you did from your description, or why that failed.
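For example, in a Spring Boot application you could expose the settings as a bean (a minimal sketch; it assumes spring-boot-starter-jooq's auto-configuration, which in recent Spring Boot versions picks up a Settings bean - verify for your version):
import org.jooq.conf.Settings;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class JooqConfig {

    // Turns off jOOQ's execute logging (the org.jooq.tools.LoggerListener
    // DEBUG output with query text and fetched records), independently of
    // the logback / Spring Boot logging level.
    @Bean
    public Settings jooqSettings() {
        return new Settings().withExecuteLogging(false);
    }
}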

Why does ApplicationStart timeout with AWS code deploy?

I am using CodeDeploy to deploy a Spring Boot app to an EC2 instance, but I keep getting a script timeout error. I even set the timeout to 60 seconds, even though the application always starts up within 20 seconds. The application starts up fine. I run top on the Linux instance and see the Java process started up. I can then use Postman to hit the HTTP status check endpoint and confirm that it has started up successfully. But this is what it looks like in the CodeDeploy console:
The appspec.yml file looks like this
The server_start.sh file looks like this.
Why is this happening? Thanks.
I think this has more to do with how Linux processes work than with CodeDeploy. I'm far from being a specialist on that, but according to the AWS documentation, there is a specific way you must start long-running processes such as a Java application.
The syntax is:
#!/bin/bash
/tmp/sleep.sh > /dev/null 2> /dev/null < /dev/null &
Replace the sleep script with your Java command.
More details here
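Applied to this question, a server_start.sh along those lines could look like this (a minimal sketch; the jar location is illustrative):
#!/bin/bash
# Start the app detached from the hook's shell: redirect stdin/stdout/stderr
# and background the JVM so the ApplicationStart script can exit immediately
# instead of holding the lifecycle event open until it times out.
cd /opt/app
java -jar application.jar > /dev/null 2> /dev/null < /dev/null &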
You should move some of the commands in your script to the BeforeInstall or AfterInstall hooks.
Remove java -jar application.jar from it.
BeforeInstall – You can use this deployment lifecycle event for preinstall tasks, such as decrypting files and creating a backup of the current version.
Install – During this deployment lifecycle event, the CodeDeploy agent copies the revision files from the temporary location to the final destination folder. This event is reserved for the CodeDeploy agent and cannot be used to run scripts.
AfterInstall – You can use this deployment lifecycle event for tasks such as configuring your application or changing file permissions.
ApplicationStart – You typically use this deployment lifecycle event to restart services that were stopped during ApplicationStop.
Then create another bash script for your ApplicationStart hook, and put the line you removed earlier in this script.
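A minimal sketch of how the appspec.yml hooks could be laid out (script names, paths, and timeouts are illustrative):
version: 0.0
os: linux
files:
  - source: /
    destination: /opt/app
hooks:
  AfterInstall:
    - location: scripts/configure.sh
      timeout: 60
  ApplicationStart:
    - location: scripts/server_start.sh
      timeout: 60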

How to modify the default `--logdest eventlog` for the Puppet Agent service on Windows?

I'm running Puppet Agent as a service on Windows, but I'm unable to find in the docs how to change the default --logdest eventlog behaviour to --logdest <FILE>. I want the agent logs stored in a file, not in the Windows Event Log, or better - if that's possible - have them sent back to the Puppet Master.
You can add the --logdest to the 'ImagePath' value located in this registry key:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\pe-puppet
We add the switch through Puppet code after the agent is installed, meaning that the first run's output goes to the event log, but all subsequent runs are sent to the local file. You can also modify the registry key during install through a PowerShell script.
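For example, a PowerShell sketch of that registry edit (run as Administrator; the log path is illustrative, and note that running it twice would append the switch twice):
# Append --logdest <FILE> to the Puppet service's command line.
$key   = 'HKLM:\SYSTEM\CurrentControlSet\services\pe-puppet'
$image = (Get-ItemProperty -Path $key).ImagePath
Set-ItemProperty -Path $key -Name ImagePath -Value "$image --logdest C:\puppet-agent.log"
# Restart the service so the new command line takes effect.
Restart-Service -Name pe-puppet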

Invoke a shell script execution using nagios

Hi all, I have a script (/scripts/startAll.sh) which restarts all the components (.jar files) present on the server. So whenever my server goes down, I want to invoke the execution of that script using Nagios, which is running on a different Linux server. Is it possible to do so? Kindly help on how to invoke execution of this script using Nagios.
Event Handlers
Nagios and Naemon allow executing custom scripts, both for hosts and for services entering a 'problem state.' Since your implementation is for restarting specific applications, yours will most likely need to be service event handlers.
From Nagios Documentation:
Event handlers can be enabled or disabled on a program-wide basis by using the enable_event_handlers option in your main configuration file. Host- and service-specific event handlers can be enabled or disabled by using the event_handler_enabled directive in your host and service definitions. Host- and service-specific event handlers will not be executed if the global enable_event_handlers option is disabled.
Enabling and Creating Event Handler Commands for a Service or Host
First, enable event handlers by modifying or adding the following line to your Nagios config file.
[IE: /usr/local/nagios/etc/nagios.cfg]:
enable_event_handlers=1
Define and enable an event handler on the service failure(s) that will trigger the script. Do so by adding the event_handler and event_handler_enabled directives inside of the service you've already defined.
[IE: /usr/local/nagios/etc/services.cfg]:
define service{
    host_name               my-server
    service_description     my-check
    check_command           my-check-command!arg1!arg2!etc
    ....
    event_handler           my-eventhandler
    event_handler_enabled   1
}
The last step is to create the event_handler command named in step 2, and point it to a script you've already created. There are a few approaches to this (SSH, NRPE, Locally-Hosted, Remotely Hosted). I'll use the simplest method, hosting a BASH script on the monitor system that will connect via SSH and execute:
[IE: /usr/local/nagios/etc/objects/commands.cfg]:
define command{
    command_name    my-eventhandler
    command_line    /usr/local/nagios/libexec/eventhandlers/my-eventhandler.sh
}
In this example, the script "my-eventhandler.sh" should use SSH to connect to the remote system, and execute the commands you've decided on.
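A minimal sketch of what my-eventhandler.sh could do (this assumes the command_line above is extended to pass the standard $SERVICESTATE$, $SERVICESTATETYPE$ and $SERVICEATTEMPT$ macros as arguments, as in the Nagios event handler examples):
#!/bin/bash
# $1 = service state, $2 = state type, $3 = current check attempt
case "$1" in
CRITICAL)
    # Only restart once the failure is confirmed (HARD state),
    # not on the first SOFT failure.
    if [ "$2" = "HARD" ]; then
        ssh nagios@my-server "/scripts/startAll.sh"
    fi
    ;;
esac
exit 0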
NOTE: This is only intended as a quick, working solution for one box in your environment. In practice, it is better to create an event handler script remotely, and to use an agent such as NRPE to execute the command while passing a $HOSTNAME$ variable (thus allowing the solution to scale across more than one system). The simplest tutorial I've found for using NRPE to execute an event handler can be found here.
You can run shell scripts on remote hosts through snmpd using check_by_snmp.pl.
Take a look at https://exchange.nagios.org/directory/Plugins/*-Remote-Check-Tunneling/check_by_snmp--2F-check_snmp_extend--2F-check_snmp_exec/details
This is a very useful plugin for nagios. I work with this a lot.
Good luck!!

Why is my spring.datasource configuration not being picked up as expected

I have a batch job which runs perfectly well in standalone mode. I converted it to a Spring XD batch job. I am using Spring XD version 1.0.0.M5.
Some issues I face:
(i) I do not want to use hsqldb as my spring.datasource; I wanted to switch to mysql. In order to do so I updated the xd-config.yml file to reflect the same. It did not work. Adding a snippet (application.yml) with the relevant datasource information to my job config folder did not work either.
I set the spring.datasource related environment variables on the command line. That works.
Q: Is there a way to have the mysql profile picked up such that the relevant metadata is read either from the application.yml snippet or from the xd-config.yml snippet, without me having to set the environment variables manually?
The database configuration is still a work-in-progress. The goal for M6 is to have what you specify in xd-config.yml control both the Spring Batch repository tables and the default for your batch jobs using JDBC.
In M5 there are separate settings to control this. The Spring Batch repository uses what is in config/xd-config.yml while the batch jobs you launch depend on config/batch-jdbc.properties. To use MySQL for both I changed:
config/xd-config.yml
# Config for use with MySQL - uncomment and edit with relevant values for your environment
spring:
  datasource:
    url: jdbc:mysql://localhost:3306/xd
    username: spring
    password: password
    driverClassName: com.mysql.jdbc.Driver
  profiles:
    active: default,mysql
config/batch-jdbc.properties
# Setting for the JDBC batch import job module
url=jdbc:mysql://localhost:3306/xd
username=spring
password=password
driverClass=com.mysql.jdbc.Driver
# Whether to initialize the database on job creation, and the script to
# run to do so if initializeDatabase is true.
initializeDatabase=false
initializerScript=init_batch_import.sql
