Invoke a shell script execution using Nagios

Hi all, I have a script (/scripts/startAll.sh) that restarts all the components (.jar files) on my server. Whenever the server goes down, I want to invoke this script from Nagios, which runs on a different Linux server. Is it possible to do so? Kindly help on how to invoke execution of this script using Nagios.

Event Handlers
Nagios and Naemon allow executing custom scripts, both for hosts and for services entering a 'problem state.' Since your implementation is for restarting specific applications, yours will most likely need to be service event handlers.
From the Nagios documentation:
Event handlers can be enabled or disabled on a program-wide basis by using the enable_event_handlers option in your main configuration file. Host- and service-specific event handlers can be enabled or disabled by using the event_handler_enabled directive in your host and service definitions. Host- and service-specific event handlers will not be executed if the global enable_event_handlers option is disabled.
Enabling and Creating Event Handler Commands for a Service or Host
First, enable event handlers by modifying or adding the following line to your Nagios config file.
[e.g. /usr/local/nagios/etc/nagios.cfg]:
enable_event_handlers=1
Define and enable an event handler for the service failure(s) that will trigger the script. Do so by adding the event_handler and event_handler_enabled directives inside the service you've already defined.
[e.g. /usr/local/nagios/etc/services.cfg]:
define service{
        host_name               my-server
        service_description     my-check
        check_command           my-check-command!arg1!arg2!etc
        ....
        event_handler           my-eventhandler
        event_handler_enabled   1
}
The last step is to create the event_handler command named in step 2 and point it at a script you've already created. There are a few approaches to this (SSH, NRPE, locally hosted, remotely hosted). I'll use the simplest method: a Bash script hosted on the monitoring system that connects via SSH and executes the restart:
[e.g. /usr/local/nagios/etc/objects/commands.cfg]:
define command{
        command_name    my-eventhandler
        command_line    /usr/local/nagios/libexec/eventhandlers/my-eventhandler.sh
}
In this example, the script "my-eventhandler.sh" should use SSH to connect to the remote system, and execute the commands you've decided on.
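As a minimal sketch (the SSH user, key-based authentication, and the remote script path are assumptions, and the command_line above would need to pass the $SERVICESTATE$ and $SERVICESTATETYPE$ macros as arguments for the state checks to work), "my-eventhandler.sh" could look like:

#!/bin/bash
# my-eventhandler.sh -- illustrative sketch, not the poster's actual script.
# Expects Nagios macros as arguments:
#   $1 = $SERVICESTATE$     (OK/WARNING/UNKNOWN/CRITICAL)
#   $2 = $SERVICESTATETYPE$ (SOFT/HARD)
case "$1" in
OK|WARNING|UNKNOWN)
        # No action needed for these states
        ;;
CRITICAL)
        # Restart only once the problem is confirmed (HARD state),
        # so a transient soft failure doesn't trigger a restart.
        if [ "$2" = "HARD" ]; then
                ssh nagios@my-server "/scripts/startAll.sh"
        fi
        ;;
esac
exit 0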
NOTE: This is only intended as a quick, working solution for one box in your environment. In practice, it is better to create an event handler script remotely, and to use an agent such as NRPE to execute the command while passing a $HOSTNAME$ variable (thus allowing the solution to scale across more than one system). The simplest tutorial I've found for using NRPE to execute an event handler can be found here.

You can run shell scripts on remote hosts via snmpd using check_by_snmp.pl.
Take a look at https://exchange.nagios.org/directory/Plugins/*-Remote-Check-Tunneling/check_by_snmp--2F-check_snmp_extend--2F-check_snmp_exec/details
This is a very useful plugin for Nagios; I work with it a lot.
Good luck!!

Related

Restart service with custom path to init.d file

I want to restart a service via an init.d file on AIX. Ansible's service and sysvinit modules didn't work. How can I control those services using Ansible?
I know I could run a shell command, but maybe there is a builtin solution.
This is what I would do in a shell:
/etc/rc.d/init.d/nrpe restart
From the docs of the service builtin:
Controls services on remote hosts. Supported init systems include BSD init, OpenRC, SysV, Solaris SMF, systemd, upstart.
Basically, the service module tries to auto-detect which init system is used and performs the action using that init system. But if your init system does not know about the service (you are running the init script directly, right?), it (the init system) will not be able to restart it.
So you cannot use the service module, or any other module that tries to interact with your init system, if the init system is not aware of your service.
You should put your init script into the correct directory for your init system to recognize it (then you can also run service nrpe restart) and then use the service module.
If you cannot do that for some reason, you will need to use the command or shell module to restart your service, as sketched below.
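A minimal sketch of that fallback as a playbook task (the fully qualified module name assumes a reasonably recent Ansible; plain command: also works on older versions):

# Fallback: call the init script directly, since the init system
# does not know about the service.
- name: Restart nrpe via its init script
  ansible.builtin.command: /etc/rc.d/init.d/nrpe restart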

Launching a Singularity Container Remotely using Visual Studio Code

I am aware that you can launch Docker containers remotely in VS Code. Is it possible to do the same with Singularity containers?
Update: the solution to this was published in the same issue (https://github.com/microsoft/vscode-remote-release/issues/3066#issuecomment-1019500216) as before by user oschulz:
As promised, here are some instructions on how to use Singularity with VS-Code Remote SSH via SSH RemoteCommand. The procedure described below makes VS-Code run its remote server component inside a Singularity container instance (other runtimes like Shifter work too).
Acknowledgement: Credit for a lot of this goes to @gipert, who refined my original approach (using a custom SSH script) when support for RemoteCommand became available in VS-Code recently.
Step 1
Use VS-Code >= v1.64 (it includes support for the SSH RemoteCommand setting), and install the pre-release version of the Remote SSH extension.
Important: In the VS-Code settings, set "remote.SSH.enableRemoteCommand": true.
Step 2
In your "$HOME/.ssh/config", add something like
Host myimage1~*
RemoteCommand singularity shell /path/to/image1.sif
RequestTTY yes
Host myimage2~*
RemoteCommand singularity shell /path/to/image2.sif
RequestTTY yes
Host somehost myimage1~somehost myimage2~somehost
HostName some.host.somewhere
User your_username_
Host otherhost myimage1~otherhost myimage2~otherhost
HostName some.otherhost.somewhere
User your_username_
Test whether this works using ssh myimage1~somehost. This should drop you into an SSH session inside of an instance of the "/path/to/image1.sif" container image on some.host.somewhere.
Connecting to the remote host with VS-Code: F1 > "Connect to Host" > "myimage1~somehost" should now get you a remote VS-Code session running in the container image as well. The same goes for "myimage2~somehost", "myimage1~otherhost" and "myimage2~otherhost".
Step 3
However, since VS-Code reuses the remote server instance, that's not sufficient to run multiple container images on the same host at the same time. To get separate (per-container) VS-Code server instances on the same host, add something like this to your VS-Code preferences:
"remote.SSH.serverInstallPath": {
"myimage1~somehost": "~/.vscode-container/myimage1",
"myimage1~otherhost": "~/.vscode-container/myimage1",
"myimage2~somehost": "~/.vscode-container/myimage2",
"myimage2~otherhost": "~/.vscode-container/myimage2"
}
Request to the VS-Code dev team
Could "remote.SSH.serverInstallPath" be controlled via an environment variable? This would allow us to eliminate all these cumbersome "remote.SSH.serverInstallPath" preferences. The environment variable could be set by a container startup script on the remote side (like the one below) automatically, depending on the selected container image.
Other Container runtimes
To use a different container runtime than Singularity (e.g. Shifter, Charliecloud, etc.), simply replace singularity shell /path/to/image1.sif by the appropriate command for your runtime.
On some systems (e.g. with Shifter at NERSC) you may also need to override $XDG_RUNTIME_DIR, since its default location may not be writable from within a container instance. In such cases, it's best to use a custom container run-script like
#!/bin/sh
# Point XDG_RUNTIME_DIR at a per-user writable location, then start the container:
export XDG_RUNTIME_DIR="${TMPDIR:-/tmp}/`whoami`/run"
exec shifter --image="$1"
So in your SSH config, use
RemoteCommand /my/homedir/.local/bin/run_container image_name
I maintain a little container start-script called cenv that handles $XDG_RUNTIME_DIR (and quite a bit more, including some default bind-mounts) automatically for both Singularity and Shifter (contributions welcome).
Tips and tricks
If things don't work, try "Kill server on remote" from VS-Code and reconnect.
You can also try starting over from scratch with brute force: Close the VS-Code remote connection. Then, from an external terminal, kill the remote VS-Code server instance:
$ ssh somehost
$ kill -9 -1
(Will kill all processes you own on the remote host.)
Remove the ~/.vscode-server directory.
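For example, from that same external terminal:

$ ssh somehost 'rm -rf ~/.vscode-server'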
Old:
I believe this is still not supported. Refer to this issue: https://github.com/microsoft/vscode-remote-release/issues/3066, and there are also some ideas for potential workarounds in the same link.

Why does ApplicationStart timeout with AWS code deploy?

I am using CodeDeploy to deploy a Spring Boot app to an EC2 instance, but I keep getting a script timeout error. I even set the timeout to 60 seconds, even though the application always starts up within 20 seconds. The application starts up fine: I run top on the Linux instance and see the Java process, and I can then use Postman to hit the HTTP status check endpoint and confirm that it has started up successfully. But the CodeDeploy console still reports the ApplicationStart step as failed with a script timeout.
The appspec.yml hooks ApplicationStart to server_start.sh, which starts the app with java -jar application.jar.
Why is this happening? Thanks.
I think this has more to do with how Linux processes work than with CodeDeploy. I'm far from being a specialist on that, but according to the AWS documentation, there is a specific way you must start long-running processes such as a Java application.
The syntax is:
#!/bin/bash
/tmp/sleep.sh > /dev/null 2> /dev/null < /dev/null &
Replace the sleep script with your Java command.
More details here
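Applied to this question, a hedged sketch of what server_start.sh would become (the working directory is an assumption; the jar name is taken from the question):

#!/bin/bash
# Detach the JVM from the deployment lifecycle: redirect all I/O and
# background the process so the ApplicationStart hook can exit immediately.
cd /home/ec2-user/app   # assumed install destination
java -jar application.jar > /dev/null 2> /dev/null < /dev/null &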
You should move some of the commands in your script to the BeforeInstall or AfterInstall hooks, and remove java -jar application.jar from it.
BeforeInstall – You can use this deployment lifecycle event for preinstall tasks, such as decrypting files and creating a backup of the current version.
Install – During this deployment lifecycle event, the CodeDeploy agent copies the revision files from the temporary location to the final destination folder. This event is reserved for the CodeDeploy agent and cannot be used to run scripts.
AfterInstall – You can use this deployment lifecycle event for tasks such as configuring your application or changing file permissions.
ApplicationStart – You typically use this deployment lifecycle event to restart services that were stopped during ApplicationStop.
Then create another bash script for your ApplicationStart hook, and put the line you removed earlier in this script.
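For illustration, a hedged appspec.yml sketch of that layout (script names, destination path, and timeouts are assumptions, not the poster's actual file):

version: 0.0
os: linux
files:
  - source: /
    destination: /home/ec2-user/app
hooks:
  AfterInstall:
    - location: scripts/after_install.sh    # setup steps moved out of the start script
      timeout: 60
  ApplicationStart:
    - location: scripts/server_start.sh     # starts the jar in the background
      timeout: 60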

How to modify the default `--logdest eventlog` for the Puppet Agent service on Windows?

I'm running Puppet Agent as a service on Windows but I'm unable to find in the docs how to modify the default behaviour --logdest eventlog to --logdest <FILE>. I want to have agent logs stored in a file, and not in the Windows Event Log, or better - if that's possible - have them sent back to the Puppet Master.
You can add the --logdest to the 'ImagePath' value located in this registry key:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\pe-puppet
We add the switch through Puppet code after the agent is installed, meaning that the first run's output goes to the event log, but all subsequent runs log to the local file. You can also modify the registry key during install through a PowerShell script.
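For illustration, the edit simply appends the switch to whatever service command line is already in the ImagePath value (the log path below is an assumption; pick any location the service account can write to):

<existing Puppet service command line> --logdest C:\ProgramData\PuppetLabs\puppet\var\log\puppet.log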

WAS: starting applications with scripting, which .bat or .sh file will be called?

Start the application:
The following example invokes the startApplication operation on the MBean, providing the application name that you want to start.
Using Jacl:
$AdminControl invoke $appManager startApplication myApplication
Using Jython:
AdminControl.invoke(appManager, 'startApplication', 'myApplication')
I want to know which .bat or .sh file gets invoked as a result of the above script, which the WAS Integrated Solutions Console invokes implicitly when anyone tries to:
Navigate to http://IP:PORT/ibm/console/login.do
Applications > Application Types > WebSphere enterprise applications
Highlight/checkbox/select any enterprise application from the list of enterprise applications.
Press Stop/Start
I was expecting this action to invoke %WAS_HOME%\profiles\AppSrv01\bin\startServer.bat, but I couldn't find the echo messages I put in that file in any log file.
These are all implementation details, but the admin console doesn't actually use any scripts. Instead, it uses JMX directly to invoke the same ApplicationManager MBean startApplication/stopApplication operations that the Jacl snippet does.
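For reference, a hedged wsadmin (Jython) sketch of the equivalent call, including the queryNames lookup that defines appManager in the docs example (the process name server1 is an assumption):

# Look up the ApplicationManager MBean for the target server,
# then invoke the same operation the console triggers over JMX.
appManager = AdminControl.queryNames('type=ApplicationManager,process=server1,*')
AdminControl.invoke(appManager, 'startApplication', 'myApplication')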
