I wrote a shell script with the following set of commands.
sudo service apache2 reload
sudo service apache2 restart
curl -v http://api.myapi.com/API/firstApi #1
curl -v http://api.myapi.com/API/secondApi #2
echo "Success"
The second curl call (#2) takes almost 1 minute to finish its processing. When I run this command from the command line it works fine: it takes almost a minute and prints the response. But when I execute it from the shell script, it exits very fast; the expected processing does not happen in the backend, although it still prints a response. I don't have any clue why this is happening. I tried &&, but that did not work.
Any clue on this?
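For reference, here is a sketch of the same script with tracing and explicit failure handling added, which should surface where the two calls diverge (the set -x trace, the --fail flag, and the && chaining are debugging additions, not part of the original script):
#!/bin/bash
set -x                        # trace each command as it runs
sudo service apache2 reload
sudo service apache2 restart
# --fail makes curl exit non-zero on HTTP errors, so the && chain stops on failure
curl -v --fail http://api.myapi.com/API/firstApi && \
curl -v --fail http://api.myapi.com/API/secondApi && \
echo "Success"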
Due to some issues I won't elaborate on here, to save time, I made a bash script that pings Google every 10 minutes; if there is a response it keeps the loop running, and if not, the PC restarts. After a lot of hurdles I managed to write the script and make it start on bootup. However, I want to see the results in a terminal, meaning I want to keep monitoring it, but the terminal does not open on bootup. It does open if I run the script as ./net.sh.
The script is running on startup, that much I know because I use another script to open an application and it works flawlessly.
My system information
NAME="Linux Mint"
VERSION="18.3 (Sylvia)"
ID=linuxmint
ID_LIKE=ubuntu
PRETTY_NAME="Linux Mint 18.3"
VERSION_ID="18.3"
HOME_URL="http://www.linuxmint.com/"
SUPPORT_URL="http://forums.linuxmint.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/linuxmint/"
VERSION_CODENAME=sylvia
UBUNTU_CODENAME=xenial
The contents of my net.sh bash script are
#!/bin/bash
xfce4-terminal &
sleep 30
while true
do
    # one ping attempt; exit status 0 means a reply was received
    ping -c1 google.com
    if [ $? -eq 0 ]; then
        echo "Ping Successful. The Device will Continue Operating"
        sleep 600
    else
        systemctl reboot
    fi
done
I have put the scripts in /usr/bin and added them to /etc/rc.local so they start at boot.
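For context, an rc.local entry for this kind of setup typically looks something like the sketch below (the exact file contents are an assumption; only the /usr/bin/net.sh path comes from the description above):
#!/bin/sh
# /etc/rc.local runs as root at the end of the boot sequence
/usr/bin/net.sh &
exit 0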
So I did some further research, and with help from Reddit I realized that the reason I couldn't get it to show in a terminal was that the script was starting at boot, but I needed it to start after user login. So I added the script to Startup Applications (which can be found by searching the start menu, if that's what it's called). But it was still giving issues, so I divided the script into two parts.
I put the net.sh script on startup and directed that script to open my main script, which I named net_loop.sh.
This is how the net.sh script looks
#!/bin/bash
sleep 20
xfce4-terminal -e /usr/bin/net_loop.sh
And the net_loop.sh
#!/bin/bash
while true
do
    ping -c1 google.com
    if [ $? -eq 0 ]; then
        echo "Ping Successful. The Device will Continue Operating"
        sleep 600
    else
        systemctl reboot
    fi
done
The result: the output of the net_loop.sh script now opens in another terminal.
Note: I used help from this thread
If a minute-level interval is usable, why not use cron to start your script?
$> crontab -e
or
$> sudo crontab -e
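For example, an entry along these lines would run the check every 10 minutes (a sketch; the /usr/bin/net_loop.sh path comes from the question, and the schedule is adjustable):
# m h dom mon dow  command
*/10 * * * * /usr/bin/net_loop.sh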
I managed to start the JBoss service through a shell script running locally inside the server.
if [ $? -eq 0 ]; then
{
sh /jboss-6.1.0.Final/bin/run.sh -c server1 -g app1 -u x.x.x.x -b x.x.x.x -Djboss.messaging.ServerPeerID=1 &
}; fi
My problem is that I am able to start the service and the application works, but once the script finishes running, it does not return to the shell ($ prompt) and keeps hanging there forever. When I run the same command directly (without the script), after the command finishes running I can hit the Enter key to get my $ prompt back and do other work.
Can someone tell me what I am missing in my code so that I can return to my $ prompt?
Remove the & from the shell script. Also remove the {} from the if block; they are not needed.
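Applied to the snippet in the question, the corrected block would look roughly like this (same run.sh arguments as above):
if [ $? -eq 0 ]; then
    sh /jboss-6.1.0.Final/bin/run.sh -c server1 -g app1 -u x.x.x.x -b x.x.x.x -Djboss.messaging.ServerPeerID=1
fi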
If I run this shell script, exec causes wget to act goofy.
echo "wget http://www.google.com/ -O - >> output.html" > /tmp/mytext
while read line
do
exec $line
done < /tmp/mytext
It acts like wget is operating on three different URLs:
wget http://www.google.com/ -O -
wget >>
wget output.html
The first command spits the output to STDOUT and the next two wget commands fail as they are nonsense.
How do I get exec to work properly?
I'm using exec instead of simply calling bash on the file because if I use exec on a large list of wget calls I get more than one wget process spawned. Simply calling bash on a file with a large list of URLs is slow, as it waits for one wget operation to complete before moving on to the next one.
Versions:
GNU Wget 1.15 built on linux-gnu.
GNU bash, version 4.3.0(1)-release (i686-pc-linux-gnu)
I'm using exec instead of simply calling bash on the file because if I use exec on a large list of wget calls I get more than one wget process spawned.
No. When exec is called, it does not spawn a new process. It replaces the existing process. See man bash for details.
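You can watch the replacement happen with a one-liner (a sketch; ps is just a convenient command to exec into):
# $$ expands to the shell's PID before exec runs, so ps reports that same PID,
# now running as ps: the process was replaced, not forked
bash -c 'echo "shell pid: $$"; exec ps -o pid=,comm= -p $$'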
Simply calling bash on a file with a large list of URLs is slow, as it waits for one wget operation to complete before moving on to the next one.
True. Fortunately, there is a solution. To run a lot of processes in parallel, run them in the background. For example, to run many wget processes in parallel, use:
while read url
do
    wget "$url" -O - >> output.html &
done < list_of_urls
The ampersand at the end of the line causes that command to run in the background, in parallel with everything else. The above code will start new wget processes as fast as it can; those processes will continue until they complete.
You can experiment with this idea very simply at the command prompt. Run
sleep 10s
and your command prompt will disappear for 10 seconds. However, run:
sleep 10s &
and your command prompt will return immediately while sleep runs in the background.
man bash explains:
If a command is terminated by the control operator &,
the shell executes the command in the background in a
subshell. The shell does not wait for the command to
finish, and the return status is 0.
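One caveat worth adding to the loop above: it returns as soon as the last wget has been launched, not when the downloads finish. If the rest of the script needs the files to be complete before continuing, a wait after the loop blocks until every background job is done (a sketch built on the same loop):
while read url
do
    wget "$url" -O - >> output.html &   # each download runs in the background
done < list_of_urls
wait    # block here until all background wget processes have finished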
I think you can use:
exec $(cat /tmp/mytext)
I am running Shopify Dashboard on CentOS 6 (http://shopify.github.io/dashing/). I wish to start this on boot and via cron when I pull down an update from git.
I have the following code in a bash script, which is the same code I run via the command line to start the dashboard.
#!/bin/bash
cd /usr/share/dashboard/
dashing start -p 500 -d
Running the actual script as the root user from the command line starts the application with no problem.
However, when this script is run via cron or on boot, the application is never started.
If anyone could shed some light on why this is the case, it would be most appreciated.
Per my comment, I am still not 100% sure that the script is being run as root. I would add a line to the script:
echo "$USER" > /tmp/test.txt   # record which user the script runs as
Then run the script via cron and see what value ends up in the file.
Also I question your script. Why is it necessary to cd?
How about
/usr/share/dashboard/dashing start -p 500 -d
Also you may have to use nohup, which makes the command immune to the hangup (HUP) signal, so ...
nohup /usr/share/dashboard/dashing start -p 500 -d
Those are my guesses.
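If the boot case is handled through cron, an @reboot entry with absolute paths is one way to test these guesses (a sketch; the dashing path is the guess from above, and the log file location is arbitrary):
# run once at boot; cron's PATH is minimal, so spell out absolute paths
@reboot nohup /usr/share/dashboard/dashing start -p 500 -d > /tmp/dashing.log 2>&1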
wget finishes transferring files in about 10 seconds, but then gets stuck for about 2 minutes before returning to the bash shell. Another user on the same system gets the command prompt back quickly after the wget command is executed.
Using CentOS 6.3 Linux. I have not made any changes to the .bashrc files.
If your problem is just returning to the prompt, using:
wget url &
will give you the prompt back immediately.
I usually use
nohup wget url &
to get better results.
This works for me: adding single quotes at the front and the end of the URL resolves the issue.
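That makes sense when the URL contains characters such as ? and & that the shell would otherwise interpret itself (the URL below is just an illustration):
# without quotes the shell treats & as a background operator and ? as a glob
wget 'http://example.com/download?id=1&type=full'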