IBM i OS/400 QSH - shell script

I've got a startup/stop script provided for Tivoli Workload Scheduler which starts/stops the TWS services on IBM i:
# CHECK ROOT USER
WHO=`id | cut -f1 -d" "`
if [ "$WHO" = "uid=0(root)" ]
then
su TWSSVC -c "/etc/rc.d/init.d/tebctl-tws_cpa_agent_TWSSVC stop"
exit $?
fi
/etc/rc.d/init.d/tebctl-tws_cpa_agent_TWSSVC stop
exit $?
The problem with this is that in OS/400 the equivalent of root is QSECOFR, so I amended the line
if [ "$WHO" = "uid=0(root)" ]
to
if [ "$WHO" = "uid=0(QSECOFR)" ]
After that change I got an error on the following line:
su TWSSVC -c "/etc/rc.d/init.d/tebctl-tws_cpa_agent_TWSSVC stop"
/TWSSVC/TWS/ShutDownLwa: 001-0019 Error found searching for command su. No such path or directory.
How do I change the script such that, when the user is QSECOFR, it will su into TWSSVC and trigger the start/stop script? I'm not very familiar with OS/400. I'm triggering this script in the qsh environment.

You can try the following:
sudo -u TWSSVC "/etc/rc.d/init.d/tebctl-tws_cpa_agent_TWSSVC stop"
Tell me how it goes.
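If sudo isn't available either, another option is to call su by its full path, since qsh does not have it in its search path. On IBM i the AIX-style utilities usually come from the PASE environment under /QOpenSys/usr/bin; a minimal sketch (the su path is an assumption to verify on your system, e.g. with ls /QOpenSys/usr/bin/su):
# CHECK FOR QSECOFR (the uid 0 user on IBM i)
WHO=`id | cut -f1 -d" "`
if [ "$WHO" = "uid=0(QSECOFR)" ]
then
# su is not in qsh's default search path; use the PASE binary's full path
/QOpenSys/usr/bin/su TWSSVC -c "/etc/rc.d/init.d/tebctl-tws_cpa_agent_TWSSVC stop"
exit $?
fi
/etc/rc.d/init.d/tebctl-tws_cpa_agent_TWSSVC stop
exit $?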

Related

How to grep the output of a command inside a shell script when scheduling using cron

I have a simple shell script where I need to check whether my EMR job is running or not, and I just print a log line. It does not seem to work properly when the script is scheduled using cron: it always prints the if-block statement because the value of the "status_live" variable is always empty. Can anyone suggest what is wrong here? When I run the script manually, it works properly.
#!/bin/sh
status_live=$(yarn application -list | grep -i "Streaming App")
if [ -z $status_live ]
then
echo "Running spark streaming job again at: "$(date) &
else
echo "Spark Streaming job is running, at: "$(date)
fi
Your script cannot run from cron because a cron job has no environment context at all.
For example, try to run your script as another user, nobody, that has no shell:
sudo -u nobody <script-full-path>
It will fail because it has no environment context.
The solution is to add your user's environment context to your script. Just source your .bash_profile near the top:
sed -i "1a source $HOME/.bash_profile" <script-full-path>
Your script should look like:
#!/bin/sh
source /home/<your user name>/.bash_profile
status_live=$(yarn application -list | grep -i "Streaming App")
if [ -z "$status_live" ]
then
echo "Running spark streaming job again at: "$(date) &
else
echo "Spark Streaming job is running, at: "$(date)
fi
Now try to run it again as user nobody; if it works, then cron will work as well.
sudo -u nobody <script-full-path>
Note that cron has no standard output, so you will need to redirect standard output from your script to a log file:
<script-full-path> >> <logfile-full-path>
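For example, a crontab entry (the paths and schedule here are placeholders) might look like:
*/5 * * * * /home/youruser/check_emr.sh >> /home/youruser/check_emr.log 2>&1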
# $? holds the exit status of the last command in bash shell scripting
# run your complete command below; status_live is 0 if grep finds a match
# (i.e. true in a shell if condition)
yarn application -list | grep -i "Streaming App"
status_live=$?
echo status_live: ${status_live}
if [ "$status_live" -eq 0 ]; then
echo "success"
else
echo "fail"
fi
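A more compact variant of the same idea uses grep -q to suppress the matched output and tests the pipeline's exit status directly in the if:
# grep -q exits 0 on a match without printing anything
if yarn application -list | grep -qi "Streaming App"; then
echo "Spark Streaming job is running, at: "$(date)
else
echo "Running spark streaming job again at: "$(date)
fi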

Script stuck during read line when script is executed remotely

I want to have one script which starts a service on another server.
I have tested that the script works as expected on the server where the service is going to run.
This is the code which starts the service and monitors the log until the startup message appears:
pkill -f "$1"
nohup java -jar -Dspring.profiles.active=$PROFILE $1 &
tail -n 0 -f nohup.out | while read LOGLINE
do
echo $LOGLINE
[[ "${LOGLINE}" == *"$L_LOG_STRING"* ]] && pkill -P $$ tail
done
This works fine as long as I execute that from that machine.
Now I want to call that script from another server:
#!/usr/bin/env bash
DESTINATION_SERVER=$1
ssh root@$DESTINATION_SERVER /bin/bash << EOF
echo "Restarting first service..."
/usr/local/starter.sh -s parameter
echo "Restarting second service..."
/usr/local/starter.sh -s parameter2
EOF
Well, every time I try that, the script on the remote server gets stuck in the "while read" loop. But as I said, when I execute it locally on the server it works fine, and in my "not simplified" script I'm not using any system variable or similar.
Update: I just tried to simplify the code even more with the following lines in the first scenario:
pkill -f "$1"
nohup java -jar -Dspring.profiles.active=$PROFILE $1 &
tail -n 0 -f nohup.out | sed "/$L_LOG_STRING/ q"
I'd say the problem is somehow in the "|" through ssh, but I still can't find out why.
It seems that the problem comes from not having an interactive console when you execute the ssh command, and therefore the nohup command behaves strangely.
I could solve it in two ways: redirecting the output to the file explicitly:
"nohup java -jar -Dspring.profiles.active=test $1 >> nohup.out &"
instead of:
"nohup java -jar -Dspring.profiles.active=test $1 &"
Or changing the way I access via ssh, adding the -tt option (a single -t did not work):
ssh -tt root@$DESTINATION_SERVER /bin/bash << EOF
But this last solution could lead to other problems with some characters, so unless someone suggests another solution, that is my patch which makes it work.
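For completeness, a fuller form of the explicit redirection (a sketch; the extra stderr and stdin redirections go beyond what the answer above shows) detaches the process from the terminal entirely:
# send stdout and stderr to the log and detach stdin, so nothing
# holds on to the (missing) terminal when launched through ssh
nohup java -jar -Dspring.profiles.active=$PROFILE "$1" >> nohup.out 2>&1 < /dev/null &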

Shell script continue after failure

How do I write a shell script that continues execution even if a specific command fails, but reports the failure as an error later? I tried this:
#!/bin/bash
./node_modules/.bin/wdio wdio.conf.js --spec ./test/specs/login.test.js
rc=$?
echo "print here"
chown -R gitlab-runner /gpc_testes/
chown -R gitlab-runner /gpc_fontes/
exit $rc
However, the script stops when the node_modules command fails.
You could use
command_that_would_fail || command_failed=1
# More code and even more
.
.
if [ ${command_failed:-0} -eq 1 ]
then
echo "command_that_would_fail failed"
fi
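Applied to the script in the question, that pattern could look like this sketch:
#!/bin/bash
./node_modules/.bin/wdio wdio.conf.js --spec ./test/specs/login.test.js || rc=$?
echo "print here"
chown -R gitlab-runner /gpc_testes/
chown -R gitlab-runner /gpc_fontes/
# default to 0 if the wdio command never failed
exit ${rc:-0}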
Suppose the name of the script is test.sh.
When executing the script, run it with the command below:
./test.sh 2>>error.log
Errors due to bad commands won't appear on the terminal but will be stored in the file error.log, which can be consulted afterwards.
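To keep normal output and errors in separate files, the same idea extends to:
./test.sh >>output.log 2>>error.log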

Execute root command in shell script and change to normal user after a process

I am trying to create a shell script which uses root access to install all the dependencies and, after completing that, exits from the root command and continues executing the script as the normal user.
This is the test code:
#!/bin/sh
output=$(whoami)
if [ "$(whoami)" != "root" ]
then
su -c "echo \"hi\""
echo $output
# continue doing some installation
exit
fi
echo $output # this should show the normal username, not the root name
#!/bin/sh
su -c 'echo $(whoami)'
echo $(whoami)
When you pass a command to su with the -c option, it runs as the root user, so when you want to install any dependencies you can run them that way, as shown in the example above.
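For instance, a sketch of the install-then-continue flow (the apt-get call and package name are placeholders for whatever dependencies you actually install):
#!/bin/sh
# runs as root after su prompts for the root password
su -c 'apt-get install -y build-essential'
# back in the normal user's context for the rest of the script
echo "continuing as $(whoami)"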

How do I determine if a shell script is running with root permissions?

I've got a script I want to require be run with su privileges, but the interesting scripted command that will fail comes very late in the script, so I'd like a clean test up front to determine whether the script will fail without su capabilities.
What is a good way to do this for bash, sh, and/or csh?
bash/sh:
#!/usr/bin/env bash
# (Use #!/bin/sh for sh)
if [ `id -u` = 0 ] ; then
echo "I AM ROOT, HEAR ME ROAR"
fi
csh:
#!/bin/csh
if ( `id -u` == "0" ) then
echo "I AM ROOT, HEAR ME ROAR"
endif
You might add something like this at the beginning of your script:
#!/bin/sh
ROOTUID="0"
if [ "$(id -u)" -ne "$ROOTUID" ] ; then
echo "This script must be executed with root privileges."
exit 1
fi
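A common variant of the same check, if you would rather escalate than abort (assuming sudo is available to the invoking user), re-executes the script as root:
#!/bin/sh
if [ "$(id -u)" -ne 0 ] ; then
# re-run this script under sudo, preserving its arguments
exec sudo "$0" "$@"
fi
echo "Now running with root privileges."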
