While using a parallel pipeline, I am getting exit code 3 in the shell script.
stage('Core:Restart') {
    agent { label "master" }
    steps {
        wrap([$class: 'MaskPasswordsBuildWrapper', varPasswordPairs: [[password: "${password}"]]]) {
            sh label: '',
               script: '''#!/bin/bash
if [[ -z ${CORE_BRANCH} ]]; then
    echo "You have entered an empty branch for the CORE module"
else
    for ip in ${CORE_SERVER_IP//,/ }; do
        echo "$ip"
        sshpass -p $password ssh -o StrictHostKeyChecking=no -tt ${USER}@${ip} <<EOF
set -e
sudo -i
set -e
/etc/init.d/tomcat status
exit 0
exit 0
EOF
    done
fi'''
        }
    }
}
Sharing the snapshot below as an additional reference.
[screenshot: Jenkins option]
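For isolating where the non-zero status comes from, here is a hedged, stripped-down sketch of the same loop run as a plain script, with the exit status of each ssh invocation echoed per host. It assumes the same CORE_SERVER_IP, USER and password variables as the pipeline, and a passwordless sudo on the targets (an assumption):
#!/bin/bash
for ip in ${CORE_SERVER_IP//,/ }; do
    echo "$ip"
    sshpass -p "$password" ssh -o StrictHostKeyChecking=no -tt "${USER}@${ip}" <<'EOF'
# assumes passwordless sudo on the target; adjust to match your environment
sudo /etc/init.d/tomcat status
exit
EOF
    rc=$?
    echo "ssh to ${ip} exited with ${rc}"
done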
We are in a situation where we need to check the status of a process over SSH, exit the ssh session and the script if the process is running, and trigger an email if it is not running.
Below is a snippet of the current script, which calls the status_email function in both cases.
I am looking for your help to invoke status_email only when the process is not running.
status_email() {
******
****
}
for hostname in `cat ai_hosts.txt`; do
    ssh actional@"$hostname" /bin/sh << 'EOF'
pid=`ps -ef | grep <<Process_Details>> | grep -v grep | awk '{print $2}'`
if [ "${pid:-null}" = null ]; then
    echo "not running"
else
    echo "running"
    exit
fi
EOF
    status_email;
done
You can store the output of ssh in a variable, then check the variable and, depending on its value, do something. For example, if you save the following code in remote-process-check.sh:
#!/bin/sh
host=your-host-name-or-ip-address
status_email () {
echo 'Status email was sent.';
}
output=$(ssh root@"$host" "ps -C $1 >/dev/null && echo 'Running' || echo 'Not running'")
if [ "$output" = 'Not running' ]; then
status_email
fi
You can use it like:
bash remote-process-check.sh process-name
bash remote-process-check.sh mysqld
bash remote-process-check.sh apache2
and if the process is not running, the status_email function will be invoked and you will see "Status email was sent." in your console.
The interesting part of the script is:
output=$(ssh root@"$host" "ps -C $1 >/dev/null && echo 'Running' || echo 'Not running'")
where ps -C $1 checks whether the process is running, >/dev/null redirects its output to the black hole, and if the command is successful (meaning the process is running) echo 'Running' is executed, otherwise echo 'Not running'; the result is stored in the variable output.
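A slight variation that avoids the string comparison is to rely on the exit status that ssh propagates from the remote ps; a minimal sketch, assuming the same host variable and status_email function as above:
#!/bin/sh
host=your-host-name-or-ip-address
status_email () {
    echo 'Status email was sent.';
}
# ssh exits with the status of the remote command, and ps -C returns non-zero
# when no matching process exists, so the if can test ssh directly
if ! ssh root@"$host" "ps -C $1 >/dev/null"; then
    status_email
fi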
I'm trying to capture a live tcpdump from a custom-designed Linux system. The command I'm using so far is:
plink.exe -ssh user@IP -pw PW "shell nstcpdump.sh -s0 -U -w - not port 22 and not host 127.0.0.1" | "C:\Program Files\Wireshark\wireshark" -i -
This fails because, when executing the command on the remote system, this (custom) shell outputs "Done" before sending data. I tried to find a way to suppress the "Done" message from the shell, but there doesn't appear to be one.
So I came up with this (added findstr -V):
plink.exe -ssh user@IP -pw PW "shell nstcpdump.sh -s0 -U -w - not port 22 and not host 127.0.0.1" | findstr -V "Done" | "C:\Program Files\Wireshark\wireshark" -i -
This works more or less: after a while I get some errors and the live capture stops. I believe it might have something to do with buffering, but I'm not sure.
Does anyone know of any other method of removing/bypassing the first bytes/chars of the output from plink/remote shell?
[edit]
As asked, nstcpdump.sh is a wrapper for tcpdump. As mentioned before, this system is highly customized. nstcpdump.sh code:
root@hostname# cat /netscaler/nstcpdump.sh
#!/bin/sh
# piping the packet trace to the tcpdump
#
# FILE: $Id: //depot/main/rs_120_56_14_RTM/usr.src/netscaler/scripts/nstcpdump.sh#1 $
# LAST CHECKIN: $Author: build $
# $DateTime: 2017/11/30 02:14:38 $
#
#
# Options: any TCPDUMP options
#
TCPDUMP_PIPE=/var/tmp/tcpdump_pipe
NETSCALER=${NETSCALER:-`cat /var/run/.NETSCALER`}
[ -r ${NETSCALER}/netscaler.conf ] && . ${NETSCALER}/netscaler.conf
TIME=${TIME:-3600}
MODE=${MODE:-6}
STARTCMD="start nstrace -size 0 -traceformat PCAP -merge ONTHEFLY -filetype PIPE -skipLocalSSH ENABLED"
STOPCMD="stop nstrace "
SHOWCMD="show nstrace "
NSCLI_FILE_EXEC=/netscaler/nscli
NSTRACE_OUT_FILE=/tmp/nstrace.out
NS_STARTTRACE_PIDFILE=/tmp/nstcpdump.pid
TRACESTATE=$(nsapimgr -d allvariables | grep tracestate | awk '{ print $2}')
trap nstcpdump_exit 1 2 15
nstcpdump_init()
{
echo "##### WARNING #####"
echo "This command has been deprecated."
echo "Please use 'start nstrace...' command from CLI to capture nstrace."
echo "trace will now start with all default options"
echo "###################"
if [ ! -d $NSTRACE_DIR ]
then
echo "$NSTRACE_DIR directory doesn't exist."
echo "Possible reason: partition is not mounted."
echo "Check partitions using mount program and try again."
exit 1
fi
if [ ! -x $NSCLI_FILE_EXEC ]
then
echo "$NSCLI_FILE_EXEC binary doesn't exist"
exit 1
fi
if [ -e $NSTRACE_OUT_FILE ]
then
rm $NSTRACE_OUT_FILE
echo "" >> $NSTRACE_OUT_FILE
fi
}
nstcpdump_start_petrace()
{
sleep 0.5;
$NSCLI_FILE_EXEC -U %%:.:. $STARTCMD >/tmp/nstcpdump.sh.out
rm -f ${NS_STARTTRACE_PIDFILE}
}
nstcpdump_start()
{
# exit if trace is already running
if [ $TRACESTATE -ne 0 ]
then
echo "Error: one instance of nstrace is already running"
exit 2
fi
nstcpdump_start_petrace &
echo $! > ${NS_STARTTRACE_PIDFILE}
tcpdump -n -r - $TCPDUMPOPTIONS < ${TCPDUMP_PIPE}
nstcpdump_exit
exit 1
}
nstcpdump_exit()
{
if [ -f ${NS_STARTTRACE_PIDFILE} ]
then
kill `cat ${NS_STARTTRACE_PIDFILE}`
rm ${NS_STARTTRACE_PIDFILE}
fi
$NSCLI_FILE_EXEC -U %%:.:. $STOPCMD >> /dev/null
exit 1
}
nstcpdump_usage()
{
echo `basename $0`: utility to view/save/sniff LIVE packet capture on NETSCALER box
tcpdump -h
echo
echo NOTE: tcpdump options -i, -r and -F are NOT SUPPORTED by this utility
exit 0
}
########################################################################
while [ $# -gt 0 ]
do
case "$1" in
-h )
nstcpdump_usage
;;
-i )
nstcpdump_usage
;;
-r )
nstcpdump_usage
;;
-F )
nstcpdump_usage
;;
esac
break;
done
TCPDUMPOPTIONS="$@"
check_ns nstcpdump
#nstcpdump_init
#set -e
if [ ! -e ${TCPDUMP_PIPE} ]
then
mkfifo $TCPDUMP_PIPE
if [ $? -ne 0 ]
then
echo "Failed creating pipe [$TCPDUMP_PIPE]"
exit 1;
fi
fi
nstcpdump_start
Regards
I'm working on a script to automate the creation of a .gitconfig file.
This is my main script, which calls a function that in turn executes another file.
dotfile.sh
COMMAND_NAME=$1
shift
ARG_NAME=$@
set +a
fail() {
echo "";
printf "\r[${RED}FAIL${RESET}] $1\n";
echo "";
exit 1;
}
set -a
sub_setup() {
info "This may overwrite existing files in your computer. Are you sure? (y/n)";
read -p "" -n 1;
echo "";
if [[ $REPLY =~ ^[Yy]$ ]]; then
for ARG in $ARG_NAME; do
local SCRIPT="~/dotfiles/setup/${ARG}.sh";
[ -f "$SCRIPT" ] && echo "Applying '$ARG'" && . "$SCRIPT" || fail "Unable to find script '$ARG'";
done;
fi;
}
case $COMMAND_NAME in
"" | "-h" | "--help")
sub_help;
;;
*)
CMD=${COMMAND_NAME/*-/}
sub_${CMD} $ARG_NAME 2> /dev/null;
if [ $? = 127 ]; then
fail "'$CMD' is not a known command or has errors.";
fi;
;;
esac;
git.sh
git_config() {
if [ ! -f "~/dotfiles/git/gitconfig_template" ]; then
fail "No gitconfig_template file found in ~/dotfiles/git/";
elif [ -f "~/dotfiles/.gitconfig" ]; then
fail ".gitconfig already exists. Delete the file and retry.";
else
echo "Setting up .gitconfig";
GIT_CREDENTIAL="cache"
[ "$(uname -s)" == "Darwin" ] && GIT_CREDENTIAL="osxkeychain";
user " - What is your GitHub author name?";
read -e GIT_AUTHORNAME;
user " - What is your GitHub author email?";
read -e GIT_AUTHOREMAIL;
user " - What is your GitHub username?";
read -e GIT_USERNAME;
if sed -e "s/AUTHORNAME/$GIT_AUTHORNAME/g" \
-e "s/AUTHOREMAIL/$GIT_AUTHOREMAIL/g" \
-e "s/USERNAME/$GIT_USERNAME/g" \
-e "s/GIT_CREDENTIAL_HELPER/$GIT_CREDENTIAL/g" \
"~/dotfiles/git/gitconfig_template" > "~/dotfiles/.gitconfig"; then
success ".gitconfig has been setup";
else
fail ".gitconfig has not been setup";
fi;
fi;
}
git_config
In the console
$ ./dotfile.sh --setup git
[ ?? ] This may overwrite existing files in your computer. Are you sure? (y/n)
y
Applying 'git'
Setting up .gitconfig
[ .. ] - What is your GitHub author name?
Then I cannot see what I'm typing...
At the bottom of dotfile.sh, I redirect any error that occurs during my function call to /dev/null, but I should still be able to see what I'm typing. If I remove 2> /dev/null from the line sub_${CMD} $ARG_NAME 2> /dev/null;, it works! But I don't understand why.
I need this redirection to prevent my script from echoing an error in case the command doesn't exist; I only want my own message.
e.g.
$ ./dotfile --blahblah
./dotfiles: line 153: sub_blahblah: command not found
[FAIL] 'blahblah' is not a known command or has errors
I really don't understand why the input in my sub-script is redirected to /dev/null, since I only redirected stderr to /dev/null.
Thanks
Do you need the -e option in your read statements?
I did a quick test in an interactive shell. The following command does not echo characters:
read -e TEST 2>/dev/null
The following does echo the characters:
read TEST 2>/dev/null
With -e, read uses readline for input, and readline writes the characters it echoes back to stderr, so redirecting stderr to /dev/null hides your typing even though the input itself is still read.
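If the only purpose of the 2> /dev/null was to hide the "command not found" message for unknown sub-commands, one possible alternative (a sketch, not the only way) is to test that the function exists before calling it, so stderr stays attached for read -e; the fail and sub_ names are reused from the question:
CMD=${COMMAND_NAME/*-/}
# only invoke sub_<CMD> if such a function is actually defined,
# instead of silencing stderr for the whole call
if [ "$(type -t "sub_${CMD}")" = "function" ]; then
    sub_${CMD} $ARG_NAME
else
    fail "'$CMD' is not a known command or has errors."
fi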
In a loop in a shell script, I am connecting to various servers and running some commands. For example:
#!/bin/bash
FILENAME=$1
cat $FILENAME | while read HOST
do
0</dev/null ssh $HOST 'echo password| sudo -S
echo $HOST
echo $?
pwd
echo $?'
done
Here I am running the "echo $HOST" and "pwd" commands, and I am getting their exit status via "echo $?".
My question is that I want to store the exit status of the commands I run remotely in some variable and then, based on whether the command was successful or not, write a log entry to a local file.
Any help and code is appreciated.
ssh will exit with the exit code of the remote command. For example:
$ ssh localhost exit 10
$ echo $?
10
So after your ssh command exits, you can simply check $?. You need to make sure that you don't mask your return value. For example, your ssh command finishes up with:
echo $?
This will always return 0. What you probably want is something more like this:
while read HOST; do
    echo $HOST
    if ssh $HOST 'somecommand' < /dev/null; then
        echo SUCCESS
    else
        echo FAIL
    fi
done
You could also write it like this:
while read HOST; do
    echo $HOST
    ssh $HOST 'somecommand' < /dev/null
    if [ $? -eq 0 ]; then
        echo SUCCESS
    else
        echo FAIL
    fi
done
You can assign the exit status to a variable as simply as:
variable=$?
right after the command you are trying to inspect. Do not echo $? first, or the new value of $? will be the exit code of that echo (usually 0).
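Putting that together with the loop from the question, a minimal sketch that stores the status in a variable and appends a line to a hypothetical local log file (the file names are placeholders):
#!/bin/bash
FILENAME=$1
LOGFILE=remote-status.log
while read -r HOST; do
    ssh "$HOST" 'somecommand' < /dev/null
    status=$?                       # capture immediately, before any other command
    if [ "$status" -eq 0 ]; then
        echo "$(date '+%F %T') $HOST OK" >> "$LOGFILE"
    else
        echo "$(date '+%F %T') $HOST FAILED (exit $status)" >> "$LOGFILE"
    fi
done < "$FILENAME"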
An interesting approach would be to retrieve the whole output of each ssh command into a local variable using backticks, or even separate the fields with a special character (for simplicity, say ":"), something like:
export MYVAR=`ssh $HOST 'echo -n ${HOSTNAME}\:;pwd'`
After this you can use awk to split MYVAR into your results and continue testing in bash.
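For example, a small sketch of splitting that variable on the ":" separator (the local variable names are just illustrative):
# split "host:cwd" (the MYVAR format above) into two fields
remote_host=$(echo "$MYVAR" | awk -F: '{print $1}')
remote_cwd=$(echo "$MYVAR" | awk -F: '{print $2}')
echo "host=$remote_host cwd=$remote_cwd"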
Perhaps prepare the log file on the other side and pipe it to stdout, like this:
ssh -n user@example.com 'x() { local ret; "$@" >&2; ret=$?; echo "[`date +%Y%m%d-%H%M%S` $ret] $*"; return $ret; };
x true
x false
x sh -c "exit 77";' > local-logfile
Basically just prefix everything on the remote you want to invoke with this x wrapper. It works for conditionals, too, as it does not alter the exit code of a command.
You can easily loop this command.
This example writes into the log something like:
[20141218-174611 0] true
[20141218-174611 1] false
[20141218-174611 77] sh -c exit 77
Of course you can make it easier to parse, or adapt the logfile format to your wishes. Note that the uncaught normal stdout of the remote programs is written to stderr (see the redirection in x()).
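As a small example of the kind of post-processing this format allows (assuming the local-logfile produced above), the entries with a non-zero exit status can be filtered out like this:
# keep only log lines whose recorded exit status is not 0
grep -vE '^\[[0-9-]+ 0\] ' local-logfile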
If you need a recipe to catch and prepare the output of a command for the logfile, here is a copy of such a catcher from https://gist.github.com/hilbix/c53d525f113df77e323d - but yes, this is a bit more boilerplate to "run something in the current context of the shell, post-processing stdout+stderr without disturbing the return code":
# Redirect lines of stdin/stdout to some other function
# outfn and errfn get following arguments
# "cmd args.." "one line full of output"
: catch outfn errfn cmd args..
catch()
{
local ret o1 o2 tmp
tmp=$(mktemp "catch_XXXXXXX.tmp")
mkfifo "$tmp.out"
mkfifo "$tmp.err"
pipestdinto "$1" "${*:3}" <"$tmp.out" &
o1=$!
pipestdinto "$2" "${*:3}" <"$tmp.err" &
o2=$!
"${#:3}" >"$tmp.out" 2>"$tmp.err"
ret=$?
rm -f "$tmp.out" "$tmp.err" "$tmp"
wait $o1
wait $o2
return $ret
}
: pipestdinto cmd args..
pipestdinto()
{
local x
while read -r x; do "$@" "$x" </dev/null; done
}
STAMP()
{
date +%Y%m%d-%H%M%S
}
# example output function
NOTE()
{
echo "NOTE `STAMP`: $*"
}
ERR()
{
echo "ERR `STAMP`: $*" >&2
}
catch_example()
{
# Example use
catch NOTE ERR find /proc -ls
}
See the second-to-last line for an example.
I'm using expect to establish a persistent ssh connection:
set stb_ip [lindex $argv 0]
spawn -noecho ssh -o ControlMaster=auto -o ControlPath=/tmp/ssh-master-%r@%h:%p -o ConnectTimeout=1 -O exit root@$stb_ip
spawn -noecho ssh -fN -o ControlMaster=yes -o ControlPath=/tmp/ssh-master-%r@%h:%p -o ControlPersist=360 -o ConnectTimeout=1 root@$stb_ip
expect {
-re ".*password:" {send "\r"; interact}
}
Unfortunately I can't manage to put this into the background; I tried expect_background and fork+disconnect, but with no luck.
I even tried running this from another script with
expect -f script.ex param1 param2 &
but with no luck. Any help?
Here's a proc you can use to log in and then interact. I have not tried it with all the ssh options, but I don't see any reason it would not work. Since I use the 8.6 command "try", this is for Tcl 8.6 only, but you can modify the try to use catch for earlier versions pretty easily.
#!/bin/sh
# the next line restarts using wish \
exec /opt/usr8.6b.5/bin/tclsh8.6 "$0" ${1+"$@"}
if { [ catch {package require Expect } err ] != 0 } {
puts stderr "Unable to find package Expect ... adjust your auto_path!";
}
proc login { user password cmdline } {
set pid [spawn -noecho {*}$cmdline ]
set bad 0;
set done 0;
exp_internal 0; # set to one for extensive debug
log_user 0; # set to one to watch action
set timeout 10
set passwdcount 0
set errMsg {}
set start [clock seconds];  # used by the timeout message below
# regexp to match prompt after successful login; you may need to change it
set intialpromptregexp {^.*[\$\#>]}
expect {
-i $spawn_id
-re $intialpromptregexp {
send_user $expect_out(0,string);
set done 1
}
-re {.*assword:} {
if { $passwdcount >= 1 } {
lappend errMsg "Invalid username or password for user $user"
set bad 1
} else {
exp_send -i $spawn_id "$password\r"
incr passwdcount
exp_continue;
}
}
-re {.*Host key verification failed.} {
lappend errMsg "Host key verification failed."
set bad 1
}
-re {.*onnection refused} {
lappend errMsg "Connection Refused"
set bad 1
}
-re {.*onnection closed by remote host} {
lappend errMsg "Connection Refused"
set bad 1
}
-re {.*Could not resolve hostname (.*): Name or service not known} {
lappend errMsg "Host invalid: Could not resolve hostname in $cmdline : Name or service not known"
set bad 1
}
-re {\(yes/no\)\?} {
exp_send -i $spawn_id "yes\r"
exp_continue;
}
timeout {
lappend errMsg "timeout \[[expr { [clock seconds] - $start } ]\]"
set bad 1
}
full_buffer {
lappend errMsg " buffer is full"
exp_continue;
}
eof {
puts "Eof detected "
set bad 1
set done 1 ;
}
}
if { $bad } {
throw CONNECTION_ERROR [join $errMsg \n ]
}
return $spawn_id
}
# get login information in somehow in this case from command line
set user [lindex $argv 0]
set passwd [lindex $argv 1]
set host [lindex $argv 2 ]
try {
set spawn_id [login $user $passwd "ssh -X $user@$host" ]
} trap CONNECTION_ERROR a {
puts "CONNECTION ERROR: $a"
exit 1
}
interact
set exitstatus [ exp_wait -i $spawn_id ];
catch { exp_close -i $spawn_id };
# more clean up here if you want
Assuming your script works in the "foreground"...
nohup expect -f script.ex param1 param2 &
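If you also want the output somewhere other than the default nohup.out, a minimal variant (the log path is just an example):
nohup expect -f script.ex param1 param2 > /tmp/script.log 2>&1 &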
Here's a script I made a long time ago. It does what you want but doesn't use Expect (which I loathe). I don't use it any more and I can't guarantee that it even still works, but it should get you going.
#!/bin/sh
#
# Persistent ssh: Automatically create persistent ssh connections using OpenSSH 4.0
[ -z "$USER" ] && USER=`whoami`
MASTERSOCKDIR="/tmp/pssh-$USER"
MASTERSOCK="$MASTERSOCKDIR/%r-%h-%p"
# Check if master is running
output=`ssh -o ControlPath="$MASTERSOCK" -O check "$@" 2>&1`
if [ $? -ne 0 ]; then
case "$output" in
Control*)
# Master not running, SSH supports master
# Figure out socket filename
socket=`echo "$output" | sed -n -e 's/[^(]*(\([^)]*\)).*/\1/p' -e '1q'`
# Clean old socket if valid filename
case "$socket" in
"$MASTERSOCKDIR"/*) rm -f "$socket" >/dev/null 2>&1 ;;
esac
# Start persistent master connection
if [ ! -d "$MASTERSOCKDIR" ]; then
mkdir "$MASTERSOCKDIR"
chmod 700 "$MASTERSOCKDIR"
fi
ssh -o ControlPath="$MASTERSOCK" -MNf "$@"
if [ $? -ne 0 ]; then
echo "$0: Can't create master SSH connection, falling back to regular SSH" >&2
fi
;;
*)
# SSH doesn't support master or bad command line parameters
ERRCODE=$?
echo "$output" >&2
echo "$0: SSH doesn't support persistent connections or bad parameters" >&2
exit $ERRCODE
;;
esac
fi
exec ssh -o ControlPath="$MASTERSOCK" -o ControlMaster=no "$@"
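Assuming you save the script above as, say, pssh and make it executable (the name is arbitrary), usage would look like this; the first call sets up the shared master connection and later calls reuse its socket:
chmod +x pssh
./pssh user@example.com uptime      # establishes the master connection, then runs uptime
./pssh user@example.com hostname    # reuses the existing master socket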
To execute an expect script in the background, use expect eof at the end of your expect script. If you have defined interact, remove it from your script.
Changed script of OP
set stb_ip [lindex $argv 0]
spawn -noecho ssh -o ControlMaster=auto -o ControlPath=/tmp/ssh-master-%r@%h:%p -o ConnectTimeout=1 -O exit root@$stb_ip
spawn -noecho ssh -fN -o ControlMaster=yes -o ControlPath=/tmp/ssh-master-%r@%h:%p -o ControlPersist=360 -o ConnectTimeout=1 root@$stb_ip
expect {
-re ".*password:" {send "\r"; interact}
}
expect eof
Another example [1]:
#!/usr/bin/expect -f
set host "host"
set password "password"
spawn ssh $host
expect {
"(yes/no)?" {
send -- "yes\r"
exp_continue
}
"*password:*" {
send -- "$password\r"
}
}
##Removing this:
#interact
##And adding this:
expect eof
exit