Bash program simple print - Windows

Bash program to print
a = 1
b = 2
pause
if [ a < b]
then
echo "in the if command"
pause
I execute the file, but the program stops before the if, and when I skip the pause the program closes.
I don't know why the program stops before the if.
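For reference, a minimal corrected version of the script above. This is a sketch assuming the intent was the Windows-batch style of the original: `pause` is a cmd.exe command, not bash; assignments must not have spaces around `=`; and `[ a < b ]` performs redirection instead of comparison, so `-lt` is used for integers:

```shell
#!/bin/bash
a=1                                        # no spaces around "=" in bash assignments
b=2
read -r -p "Press Enter to continue..."    # bash stand-in for cmd.exe's pause
if [ "$a" -lt "$b" ]; then                 # -lt compares integers; "<" would redirect
    echo "in the if command"
fi
```

Running the original `if [ a < b ]` instead would try to read a file named `b`, which is why the script dies before reaching the if body.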

Related

How to handle Ctrl+C in a shell script?

I am trying to handle Ctrl+C in a shell script. I have code running in a while loop, but I am calling the binary from the script and running it in the background, so when I press Ctrl+C the binary should stop. The code of hello.c is below:
vim hello.c
#include <stdio.h>
int main()
{
    while(1)
    {
        int n1, n2;
        printf("Enter the first number\n");
        scanf("%d", &n1);
        printf("Enter the second number\n");
        scanf("%d", &n2);
        printf("Entered number are n1 = %d , n2 =%d\n", n1, n2);
    }
}
Below is the bash script which I used:
#/i/bin/sh
echo run the hello binary
./hello < in.txt &
trap_ctrlc()
{
    ps -eaf | grep hello | grep -v grep | awk '{print $2}' | xargs kill -9
    echo trap_ctrlc
    exit
}
trap trap_ctrlc SIGHUP SIGINT SIGTERM
After starting the script, the hello binary runs continuously. I have killed this binary from another terminal using the kill -9 pid command.
I have tried this trap_ctrlc function but it does not work. How do I handle Ctrl+C in a shell script?
In in.txt I have added the input, so I can pass this file directly to the binary:
vim in.txt
1
2
Output:
Enter the first number
Enter the second number
Entered number are n1 = 1 , n2 =2
Enter the first number
Enter the second number
Entered number are n1 = 1 , n2 =2
Enter the first number
Enter the second number
Entered number are n1 = 1 , n2 =2
And it goes on continuously.
Change your C program so it checks whether reading data actually succeeded:
#include <stdio.h>
int main()
{
    int n1, n2;
    while(1) {
        printf("Enter the first number\n");
        if(scanf("%d", &n1) != 1) return 0; /* check here */
        printf("Enter the second number\n");
        if(scanf("%d", &n2) != 1) return 0; /* check here */
        printf("Entered number are n1 = %d , n2 =%d\n", n1, n2);
    }
}
It will now terminate when the input from in.txt is depleted.
To make something that reads from in.txt many times, you could create a loop in your bash script that feeds ./hello forever (or until it's killed).
Example:
#!/bin/bash
# a function to repeatedly print the content in "in.txt"
function print_forever() {
    while true; do
        cat "$1"
        sleep 1
    done
}
echo run the hello binary
print_forever in.txt | ./hello &
pid=$!
echo "background process $pid started"
trap_ctrlc() {
    kill $pid
    echo -e "\nkill=$? (0 = success)\n"
    wait $pid
    echo "wait=$? (the exit status from the background process)"
    echo -e "\n\ntrap_ctrlc\n\n"
}
trap trap_ctrlc INT
# wait for all background processes to terminate
wait
Possible output:
$ ./hello.sh
run the hello binary
background process 262717 started
Enter the first number
Enter the second number
Entered number are n1 = 1 , n2 =2
Enter the first number
Enter the second number
Entered number are n1 = 1 , n2 =2
Enter the first number
^C
kill=0 (0 = success)
wait=143 (the exit status from the background process)
trap_ctrlc
Another option can be to kill the child after the wait is interrupted:
#!/bin/bash
function print_forever() {
    while true; do
        cat "$1"
        sleep 1
    done
}
echo run the hello binary
print_forever in.txt | ./hello &
pid=$!
echo "background process $pid started"
trap_ctrlc() {
    echo -e "\n\ntrap_ctrlc\n\n"
}
trap trap_ctrlc INT
# wait for all background processes to terminate
wait
echo first wait=$?
kill $pid
echo -e "\nkill=$? (0 = success)\n"
wait $pid
echo "wait=$? (the exit status from the background process)"

Retry a command only once when it fails (in bash)

for ( i=3; i<5; i++)
do
execute some command 1
if command 2 is successful then do not run the command 1 (the for loop should continue)
if command 2 is not successful then run command 1 only once (like retry command 1 only once, after this the for loop should continue)
done
Note that command 2 depends on command 1 and can only be executed after command 1.
for example:
for ( i=3; i<5; i++)
do
echo "i" >> mytext.txt ---> command 1
if "check the content of mytext.txt file to see if the value of i is actually added" ---> command 2
if it is not added then execute echo "i" >> mytext.txt (command 1) again and only once.
if i value is added to the file .. then exit and continue the loop
done
Note that "command 1" is quite big and not just the example echo statement shown here. I do not want to write "command 1" twice, once outside and once inside the if condition. I want this logic in an optimized way with no redundant code.
Per a comment, it sounds like the OP may need to invoke command 1 up to 2 times for a given $i value, but only wants to type command 1 once in the script.
Siddhartha's suggestion to use a function is probably good enough, but depending on the actual command 1 (OP mentions that it's 'quite big') I'm going to play devil's advocate and assume there could be additional issues with passing some args to the function (e.g., a need to escape some characters?).
The general idea is to have an internal loop that can be executed at most 2 times, with logic in the loop that will allow for an 'early' exit (eg, after just one pass through the loop).
Since we're using pseudo-code I'll use the same ...
for ( i=3; i<5; i++ )
do
    pass=1                                   # reset internal loop counter
    while ( pass -le 2 )
    do
        echo "i" >> mytext.txt               # command 1
        if ( pass -eq 1 )                    # after first 'command 1' execution
           && ( value of 'i' is in mytext.txt )   # command 2
        then
            break     # break out of inner loop; alternatively ...
            # pass=10 # ensure pass >= 2 to force loop to exit on this pass
        fi
        pass=pass+1   # on 1st pass set pass=2 => allows another pass through loop
                      # on 2nd pass set pass=3 => will force loop to exit
    done
done
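For concreteness, the pseudo-code above might look like this in real bash. This is a sketch: the echo and the grep line check are stand-ins for the OP's actual command 1 and command 2, and `mytext.txt` matches the example:

```shell
#!/bin/bash
outfile=mytext.txt
: > "$outfile"                          # start with an empty file
for (( i=3; i<5; i++ )); do
    pass=1                              # inner-loop counter: at most 2 passes
    while (( pass <= 2 )); do
        echo "$i" >> "$outfile"         # command 1
        if (( pass == 1 )) && grep -qx "$i" "$outfile"; then   # command 2
            break                       # command 2 succeeded: skip the retry
        fi
        (( pass++ ))                    # allow exactly one retry, then exit
    done
done
```

Because command 1 appears only once inside the inner loop, there is no duplicated code, and the loop body runs at most twice per value of `$i`.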
you can declare a function like
mycommand() {
    your_command -f params    # avoid naming it "command": that shadows the shell builtin
}
for (( i=3; i<5; i++ ))
do
    if mycommand; then
        echo "success"
    else
        echo "retry"
        mycommand
    fi
done

wait for a background PID from another background function

Using ksh93, I'm attempting to wait for a background process, run_cataloguer(), to finish from within a separate background process, send_mail(), using the script below:
#!/usr/bin/env ksh
function run_cataloguer
{
    echo "In run_cataloguer()"
    sleep 2
    echo "leaving run_cataloguer()"
}
function send_mail
{
    echo "In send_mail()"
    #jobs
    wait_for_cataloguer
    sleep 1
    echo "Leaving send_mail() "
}
function wait_for_cataloguer
{
    echo "In wait_for_cataloguer() PID_CAT = $PID_CAT"
    wait $PID_CAT
    waitRet=$?
    echo "waitRet = $waitRet"
}
run_cataloguer &
PID_CAT=$!
echo "PID_CAT = $PID_CAT"
send_mail &
wait # Wait for all
echo "Finished main"
The following output is seen:
PID_CAT = 1265
In run_cataloguer()
In send_mail()
In wait_for_cataloguer() PID_CAT = 1265
waitRet = 127 # THIS SHOULD be 0
Leaving send_mail()
leaving run_cataloguer()
Finished main
The problem is
waitRet = 127
which means the wait command can't see $PID_CAT, so it doesn't wait for run_cataloguer() to finish and
"leaving send_mail()"
is printed before
"leaving run_cataloguer()"
If I run send_mail in the foreground then waitRet = 0, which is correct.
So it appears that you cannot wait for a background process from within a separate background process.
Also, if I uncomment the jobs command, nothing is returned, which appears to confirm the previous statement.
If anyone has a solution, apart from using flag files :), it would be much appreciated.
It looks like this cannot be done. The solution I used was from Parvinder here:
wait child process but get error: 'pid is not a child of this shell'
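The linked workaround boils down to polling instead of waiting, since `wait` only works on children of the current shell. A minimal sketch in bash (the same idea applies in ksh93); `kill -0` sends no signal and merely tests whether the process still exists:

```shell
#!/usr/bin/env bash
run_cataloguer() {
    sleep 2
    echo "leaving run_cataloguer()"
}

send_mail() {
    # `wait $PID_CAT` would fail here with status 127: $PID_CAT is not a child
    # of this background subshell. Poll for process existence instead.
    while kill -0 "$PID_CAT" 2>/dev/null; do
        sleep 0.2
    done
    echo "Leaving send_mail()"
}

run_cataloguer &
PID_CAT=$!
send_mail &
wait          # the parent shell CAN wait for both of its own children
echo "Finished main"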

Ruby: next command after while loop

I am trying to launch one command using a while loop and then continue my script, but the loop never finishes. The condition is true; I don't want to put false, because the command has to be executed every 10 minutes.
while true
pid = spawn('xterm -e command')
sleep 600
Process.kill('TERM', pid)
end
The same bash code works fine, because I can execute the next commands of the script by using & after done:
while : ; do
xterm -e command ; sleep 600 ; done &
echo $! >/tmp/mycommand.pid
In Ruby, does the end statement block the script in my loop? Or is the true value not appropriate here?
If I understand right you want to create a thread:
Thread.new do
  while true
    sleep(1)
    puts 'inside'
  end
end
puts 'outside'
sleep(3)
And output:
outside
inside
inside

which loop in bash script

I am quite new to bash, but I need to create a simple script which will do the steps below:
Wait 1 minute
A) bash script will use CM to generate result file
B) check row 8 in result file (to know if Administrator is running any jobs or not)
if NO jobs:
    C) bash script will use CM to start cube refresh
    D) wait 1 minute
    D1) remove result file
    E) generate result file
    E1) read row 8
    no jobs:
        F) remove result file
        G) EXIT
    yes:
        I) go to D)
if YES:
    E) wait 1 minute
    F) remove result file
    go to A)
As bash doesn't have goto (or it should not be used), I tried a few loops, but I am not sure which I should choose.
I know how to:
- start cube(step C)
- generate result file (step A & E):
- check line 8:
sed '8!d' /abc_uat/cmlogs/adm_jobs_u1.log
the condition for the loops will probably be similar to this: != 'Owner = Administrator'
but how do I avoid goto?
I tried a while do loop, but I am not sure what I should add in case of a false condition. I added an else, but I'm not sure about it:
sleep 60
Generate result file with admin jobs (which admin runs inside of 3rd party tool)
while [ sed '8!d' admin_jobs_result_file.log !="Owner = Administrator" ];
do
--NO Admin jobs
START CUBE REFRESH (it will start admin job)
sleep 60
REMOVE RESULT FILE (OLD)
GENERATE RESULT FILE
while [ sed '8!d' admin_jobs_result_file.log = "Owner = Administrator" ];
--Admin is still running cube refresh
do
sleep 60
REMOVE RESULT FILE (OLD)
GENERATE RESULT FILE
-- it should continue checking every 1 minute if admin is still running cube refresh job, so I hope it will go back to while condition
else
done
else
-- Admin is running something
sleep 60
REMOVE RESULT FILE (OLD)
GENERATE RESULT FILE
-it should check result file again but I think it will finish loop
done
You can replace goto with a loop, for example a while loop.
Syntax
while <condition>
do
action
done
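A concrete instance of that pattern, combined with the sed line check from the question. This is a sketch: the sample file written here stands in for the CM-generated log, and the 1-second sleep replaces the real 60 seconds:

```shell
#!/bin/bash
# Create a stand-in for the CM-generated result file (8 lines, row 8 is the owner):
printf '%s\n' l1 l2 l3 l4 l5 l6 l7 'Owner = Administrator' > admin_jobs_result_file.log

line8=$(sed '8!d' admin_jobs_result_file.log)    # extract row 8
while [ "$line8" = "Owner = Administrator" ]; do
    # Administrator is still running a job: wait, regenerate, and re-check
    sleep 1                                       # would be 60 in the real script
    printf '%s\n' l1 l2 l3 l4 l5 l6 l7 'Owner = nobody' \
        > admin_jobs_result_file.log              # stand-in for "generate result file"
    line8=$(sed '8!d' admin_jobs_result_file.log)
done
echo "no admin jobs - safe to start cube refresh"
```

Re-reading the file and re-testing at the top of each iteration is exactly what replaces the "go to D)" step of the flow chart.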
Check out cron jobs. If possible, delegate the "wait for a minute" task to cron; cron should worry about running your script in a timely fashion. You may consider writing two scripts instead of one.
Do you really need to create a result file? Do you know about piping? (No offense, just mentioning it because you said you were fairly new to bash.)
Hopefully this is self explanatory.
result_file=admin_jobs_result_file.log

function generate {
    logmsg sleeping
    sleep 60
    rm -f "$result_file"
    logmsg generating
    # use CM to generate result file
}

function owner_is_administrator {
    # if line 8 contains "Owner = Administrator", exit success
    # else exit failure
    sed -n '8 {/Owner = Administrator/ q 0; q 1}' "$result_file"
}

function logmsg { date "+%Y-%m-%d %T -- $*"; }
##############
generate
while owner_is_administrator; do
generate
done
# at this point, line 8 does NOT contain "Owner = Administrator"
logmsg start cube refresh
# use CM to start cube refresh
generate
while owner_is_administrator; do
generate
done
logmsg Done
Looks like AIX's sed can't exit with a specified status. Try this instead:
function owner_is_administrator {
    # if line 8 contains "Owner = Administrator", exit success
    # else exit failure
    awk 'NR == 8 {if (/Owner = Administrator/) {exit 0} else {exit 1}}' "$result_file"
}
