bash shell prompt does not return (hangs) for about 2 mins after execution of wget

wget finishes transferring the files in about 10 seconds, but then hangs for about 2 minutes before returning to the bash prompt. Another user on the same system gets the command prompt back quickly after running the same wget command.
Using CentOS 6.3 Linux. I have not made any changes to the .bashrc files.
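One way to narrow down where those two minutes go is to trace wget's system calls with timestamps; a diagnostic sketch (the URL is a placeholder):
# -f follows child processes, -tt adds microsecond timestamps, -o writes the trace to a file
strace -f -tt -o wget.trace wget 'http://example.com/file'
# the entries just before the long gap show what wget was waiting on
tail -n 40 wget.trace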

If your problem is just getting back to the prompt, then by using:
wget url &
you will get the prompt back immediately. I usually use
nohup wget url &
for a better result, since the download then also survives the terminal being closed.

This worked for me:
Adding single quotes at the front and the end of the URL resolved the issue.
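Quoting helps because characters such as ? and & in a URL are shell metacharacters; an unquoted & even sends the command to the background in the middle of the URL. A hypothetical example:
# Unquoted: the shell backgrounds 'wget http://example.com/download?file=a' at the '&'
# and runs 'mode=fast' as a separate command (a plain variable assignment here)
wget http://example.com/download?file=a&mode=fast
# Quoted: the whole URL reaches wget intact
wget 'http://example.com/download?file=a&mode=fast'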

Related

bash script, how do I launch firefox

I wish to write a bash script that launches the Symfony built-in web server and then Firefox. The following simple-minded script fails because (I am not sure how to describe it with the correct jargon) the shell gets stuck on the first task. I guess it is simple, but I am a newbie at this. Thanks.
#!/bin/bash
cd /var/www/mySymfonyProj
php bin/console server:run localhost:8080
/usr/bin/firefox http://localhost:8080
Add an & at the end of the server:run line to run that process in the background: the shell will launch it and move on, but it will still wait for the firefox command on the last line to finish.
At the end of the script, you may want to call wait to wait for the server to terminate, if that's desired.
#!/bin/bash
cd /var/www/mySymfonyProj
php bin/console server:run localhost:8080 &
/usr/bin/firefox http://localhost:8080
wait
For more information on job control, look at this source. It doesn't cover everything useful, but it covers a fair amount.
I'd mention that $! expands to the PID of the most recently started background process, so you can keep track of the PIDs of various background tasks and then use wait to block until they've finished; that's often useful.
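A minimal sketch of that pattern (the sleep commands stand in for real background tasks):
#!/bin/bash
# Start two background jobs and record each PID via $!
sleep 2 &
first=$!
sleep 1 &
second=$!
# 'wait PID' blocks until that job exits, and returns the job's exit status
wait "$first"
echo "job $first finished with status $?"
wait "$second"
echo "job $second finished with status $?"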

bash ignores & for last command in loop

I just wrote my first bash script to start some redis instances on a development server. While it is mostly working, the last opened redis instance is blocking the active terminal, even though it has the trailing & sign and the other started instances aren't blocking the terminal. How would I push them all to the background?
Here's the script:
#!/bin/bash
REDIS=(6379 6380 6381 6382 6383 6390 6391 6392 6393)
for i in "${REDIS[@]}"
do
    redis-server --port "$i" &
done
It sounds like your terminal is not actually blocked; your prompt has just been overwritten by the redis-server output. It's a purely cosmetic issue. Due to the way terminals work, bash doesn't know to redraw the prompt, so it looks like the last command is running in the foreground.
Run the script again, and blindly type ls and press Enter. You'll probably see that the shell responds as normal, even though you can't see the prompt.
You can alternatively just hit Enter to get bash to redraw the prompt.
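If the stray server output bothers you, you can also redirect each instance's output so nothing overwrites the prompt; a minimal variation of the loop (the log file names are my own choice):
for i in "${REDIS[@]}"
do
    # Send stdout and stderr to a per-instance log instead of the terminal
    redis-server --port "$i" > "redis-$i.log" 2>&1 &
done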

How to run shell script on VM indefinitely?

I have a VM and I want a script on it running indefinitely. The server is always running, but I want the script to keep running after I log out. How would I go about doing so? By creating a cron job?
In general the following steps are sufficient to convince most Unix shells that the process you're launching should not depend on the continued existence of the shell:
run the command under nohup
run the command in the background
redirect all file descriptors that normally point to the terminal to other locations
So, if you want to run command-name, you should do it like so:
nohup command-name >/dev/null 2>/dev/null </dev/null &
This tells the process that will execute command-name to send all stdout and stderr to nowhere (instead of to your terminal) and also to read stdin from nowhere (instead of from your terminal). Of course if you actually have locations to write to/read from, you can certainly use those instead -- anything except the terminal is fine:
nohup command-name >outputFile 2>errorFile <inputFile &
See also the answer in Petur's comment, which discusses this issue a fair bit.
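To convince yourself it worked, record the PID, log out, log back in, and check that the process is still alive; a small sketch (the PID file location is my own choice):
nohup command-name >/dev/null 2>/dev/null </dev/null &
echo $! > /tmp/command-name.pid
# after logging out and back in:
kill -0 "$(cat /tmp/command-name.pid)" && echo "still running"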

How to prevent PuTTY shell from auto-exit after executing command from batch file in Windows?

I have written a batch file like this:
Start putty.exe -ssh 172.17.0.52 -l root -m dummy.txt
Then in dummy.txt I have written these commands:
avahi-daemon --no-drop-root -D
export XVHMI_USERCONFIG_PATH=/home/UserProfileConfig
export XDG_RUNTIME_DIR=/tmp
cd /opt/bosch/airis/bin
When I run the .bat file, PuTTY starts, the commands execute (hopefully; I'm not sure) and it exits.
How do I keep that window open?
I have googled this but found no solid help. I read on Stack Overflow itself that we need to define something in the txt file, but what, and most importantly, how?
The SSH session closes (and PuTTY with it) as soon as the command finishes. Normally the "command" is the shell. As you have overridden this default "command" and yet want to run the shell anyway, you have to execute the shell explicitly yourself:
avahi-daemon ... ; /bin/bash
Also, as the -m switch implies a non-interactive session, you probably want to force an interactive terminal back on using the -t switch.
Though, I'm not really sure whether you want to run a shell or just want to see your command's output. If the latter, did you consider using plink? It's the console terminal client from the PuTTY package. Being a console application, it inherits the console of the parent batch file, and you can keep that console from closing with the pause command, if needed.
Another option (for both PuTTY and plink) is to pause on the remote side, e.g. using the read command:
avahi-daemon ... ; read
As suggested by Martin, I tried these steps:
putty.exe -ssh 172.17.0.52 -l root -m dummy.txt -t
and added /bin/bash at the end of the commands in dummy.txt.
It worked for me. Please note that you have to follow both steps as mentioned above.
This way you can keep the session alive and can manually execute further commands.
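Putting both steps together, the final dummy.txt would be the original commands with the shell appended as the last line:
avahi-daemon --no-drop-root -D
export XVHMI_USERCONFIG_PATH=/home/UserProfileConfig
export XDG_RUNTIME_DIR=/tmp
cd /opt/bosch/airis/bin
/bin/bash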

Running a Linux script remotely from Windows and getting the execution result code

I have the following scenario to deal with:
I have to schedule the backup of my company's Linux-based server (running SUSE Linux) with ARCServe R15 (installed on Windows 2003 R2 SP2).
I know I have the ability in my backup software (ARCServe) to add pre/post-execution scripts to my backup jobs.
If the script fails, ARCServe is configured NOT to run the backup job; if it succeeds, the job runs. I have no problem with this part.
The problem is that I want to make a Windows script (to be launched by ARCServe) that executes a Linux script on the cluster:
- If the Linux script fails, I want my Windows script to fail, so my backup job in ARCServe wouldn't run.
- If the Linux script succeeds, I want my Windows script to end normally with exit code 0, so my ARCServe job would run normally.
I've tried creating this batch file (let's call it HPC.bat):
echo ON
start /wait "C:\Program Files\PUTTY\plink.exe" -v -l root -i "C:\IST\admin\scripts\HPC\pri.ppk" [cluster_name] /appli/admin/backup_admin
exit %errorlevel%
If I launch this .bat manually, by double-clicking it or running it in a command prompt under Windows, it executes normally and then ends.
If it is launched by ARCServe, the script seems never to end.
My job stays in "waiting" status; it seems the exit code of the Linux script isn't returned to my batch file, and so the batch never closes.
In my mind, what's happening is that plink just opens the connection to the Linux host, sends the script execution command, and then closes the connection, so the exit code can't be returned to the batch file. Am I right?
Is what I want to do possible, or am I trying something impossible?
So, do I have to proceed differently?
Do I have to use PuTTY or Cygwin instead of plink?
Please help; it's giving me headaches...
If you install Cygwin, you can do it exactly as you would from Linux to Linux, i.e. remotely run a command with ssh someuser@remoteserver.com somecommand
This command returns on the calling client with the same exit code that the command exited with on the remote end. If you use SSH shared keys for authentication instead of passwords, it can also be scripted without user interaction.
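A minimal sketch of that approach from a Cygwin bash prompt on the Windows machine, reusing the host placeholder and script path from the question (key-based authentication assumed to be set up):
#!/bin/bash
# ssh exits with the exit status of the remote command
ssh root@cluster_name /appli/admin/backup_admin
status=$?
echo "remote script exited with status $status"
exit "$status"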
