Bash script - Run commands that correspond to the lines of a file

I have a file like this (text.txt):
ls -al
ps -au
export COP=5
clear
Each line corresponds to a command. In my script, I need to read each line and launch each command.
PS: I tried all of these options, and with all of them I have the same problem with the export command. The file contains "export COP=5", but after running the script, if I do echo $COP in the same terminal, no value is displayed.

while IFS= read line; do eval $line; done < text.txt
Be careful with this: using eval is generally not advised, as it is quite powerful and just as easy to abuse.
However, if unprivileged users have no way to influence text.txt, it should be fine.
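On the PS about export: a script runs in a child process, so anything it exports is lost when the script exits. For export COP=5 to take effect in your current terminal, the loop has to run in the current shell, e.g. by sourcing the file that contains it. A minimal sketch, with the loop placed in a hypothetical run_lines.sh:
# run_lines.sh (hypothetical): execute each line of text.txt
while IFS= read -r line; do eval "$line"; done < text.txt
$ bash run_lines.sh; echo "$COP"     # child shell: prints nothing, COP is lost
$ . ./run_lines.sh; echo "$COP"      # sourced into the current shell: prints 5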

cat text.txt | xargs -L1 bash -c '"$@"' echo
Here xargs hands each line of the file to bash -c as arguments: $0 is set to echo, "$@" expands to the words of that line, and the resulting command is executed.

In order to avoid confusion I would simply rename the file from text.txt to text and add a shebang (e.g. #!/bin/bash) as the first line of the file. Make sure it is executable by calling chmod +x text. Afterwards you can execute it as expected.
$ cat text
#!/bin/bash
ls -al
ps -au
clear
$ chmod +x text
$ ./text


Create and write systemd service from Shell script Failed [duplicate]

This question already has answers here:
How do I use sudo to redirect output to a location I don't have permission to write to? [closed]
(15 answers)
sudo cat << EOF > File doesn't work, sudo su does [duplicate]
(5 answers)
Closed 1 year ago.
I am trying to automate the addition of a repository source to my Arch's pacman.conf file by using the echo command in my shell script. However, it fails like this:
sudo echo "[archlinuxfr]" >> /etc/pacman.conf
sudo echo "Server = http://repo.archlinux.fr/\$arch" >> /etc/pacman.conf
sudo echo " " >> /etc/pacman.conf
-bash: /etc/pacman.conf: Permission denied
If I make changes to /etc/pacman.conf manually using vim, by doing
sudo vim /etc/pacman.conf
and quitting vim with :wq, everything works fine and my pacman.conf has been manually updated without "Permission denied" complaints.
Why is this so? And how do I get sudo echo to work? (btw, I tried using sudo cat too but that failed with Permission denied as well)
As @geekosaur explained, the shell does the redirection before running the command. When you type this:
sudo foo >/some/file
Your current shell process makes a copy of itself that first tries to open /some/file for writing, then if that succeeds it makes that file descriptor its standard output, and only if that succeeds does it execute sudo. This is failing at the first step.
If you're allowed (sudoer configs often preclude running shells), you can do something like this:
sudo bash -c 'foo >/some/file'
But I find a good solution in general is to use | sudo tee instead of > and | sudo tee -a instead of >>. That's especially useful if the redirection is the only reason I need sudo in the first place; after all, needlessly running processes as root is precisely what sudo was created to avoid. And running echo as root is just silly.
echo '[archlinuxfr]' | sudo tee -a /etc/pacman.conf >/dev/null
echo 'Server = http://repo.archlinux.fr/$arch' | sudo tee -a /etc/pacman.conf >/dev/null
echo ' ' | sudo tee -a /etc/pacman.conf >/dev/null
I added > /dev/null on the end because tee sends its output to both the named file and its own standard output, and I don't need to see it on my terminal. (The tee command acts like a "T" connector in a physical pipeline, which is where it gets its name.) And I switched to single quotes ('...') instead of doubles ("...") so that everything is literal and I didn't have to put a backslash in front of the $ in $arch. (Without the quotes or backslash, $arch would get replaced by the value of the shell parameter arch, which probably doesn't exist, in which case the $arch is replaced by nothing and just vanishes.)
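A quick illustration of that quoting difference, assuming no shell parameter named arch is set:
$ echo "Server = http://repo.archlinux.fr/$arch"    # double quotes: $arch expands (here to nothing)
Server = http://repo.archlinux.fr/
$ echo 'Server = http://repo.archlinux.fr/$arch'    # single quotes: kept literal
Server = http://repo.archlinux.fr/$arch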
So that takes care of writing to files as root using sudo. Now for a lengthy digression on ways to output newline-containing text in a shell script. :)
To BLUF it, as they say, my preferred solution would be to just feed a here-document into the above sudo tee command; then there is no need for cat or echo or printf or any other commands at all. The single quotation marks have moved to the sentinel introduction <<'EOF', but they have the same effect there: the body is treated as literal text, so $arch is left alone:
sudo tee -a /etc/pacman.conf >/dev/null <<'EOF'
[archlinuxfr]
Server = http://repo.archlinux.fr/$arch
EOF
But while that's how I'd do it, there are alternatives. Here are a few:
You can stick with one echo per line, but group all of them together in a subshell, so you only have to append to the file once:
(echo '[archlinuxfr]'
echo 'Server = http://repo.archlinux.fr/$arch'
echo ' ') | sudo tee -a /etc/pacman.conf >/dev/null
If you add -e to the echo (and you're using a shell that supports that non-POSIX extension), you can embed newlines directly into the string using \n:
# NON-POSIX - NOT RECOMMENDED
echo -e '[archlinuxfr]\nServer = http://repo.archlinux.fr/$arch\n ' |
sudo tee -a /etc/pacman.conf >/dev/null
But as it says above, that's not POSIX-specified behavior; your shell might just echo a literal -e followed by a string with a bunch of literal \ns instead. The POSIX way of doing that is to use printf instead of echo; it automatically treats its argument like echo -e does, but doesn't automatically append a newline at the end, so you have to stick an extra \n there, too:
printf '[archlinuxfr]\nServer = http://repo.archlinux.fr/$arch\n \n' |
sudo tee -a /etc/pacman.conf >/dev/null
With either of those solutions, what the command gets as an argument string contains the two-character sequence \n, and it's up to the command program itself (the code inside printf or echo) to translate that into a newline. In many modern shells, you have the option of using ANSI quotes $'...', which will translate sequences like \n into literal newlines before the command program ever sees the string. That means such strings work with any command whatsoever, including plain old -e-less echo:
echo $'[archlinuxfr]\nServer = http://repo.archlinux.fr/$arch\n ' |
sudo tee -a /etc/pacman.conf >/dev/null
But, while more portable than echo -e, ANSI quotes are still a non-POSIX extension.
And again, while those are all options, I prefer the straight tee <<EOF solution above.
The problem is that the redirection is being processed by your original shell, not by sudo. Shells are not capable of reading minds and do not know that that particular >> is meant for the sudo and not for it.
You need to:
quote the redirection (so it is passed on to sudo)
and use sudo -s (so that sudo uses a shell to process the quoted redirection).
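A sketch of one way to put that into practice, assuming your sudoers policy allows starting a shell: feed the commands to the root shell on standard input, so the >> is processed by that shell rather than by yours (the quoted 'EOF' also keeps $arch literal):
sudo -s <<'EOF'
echo '[archlinuxfr]' >> /etc/pacman.conf
echo 'Server = http://repo.archlinux.fr/$arch' >> /etc/pacman.conf
EOF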
http://www.innovationsts.com/blog/?p=2758
As the instructions above are not that clear, I am using the instructions from that blog post, with examples so it is easier to see what you need to do.
$ sudo cat /root/example.txt | gzip > /root/example.gz
-bash: /root/example.gz: Permission denied
Notice that it’s the second command (the gzip command) in the pipeline that causes the error. That’s where our technique of using bash with the -c option comes in.
$ sudo bash -c 'cat /root/example.txt | gzip > /root/example.gz'
$ sudo ls /root/example.gz
/root/example.gz
We can see from the ls command's output that the compressed file creation succeeded.
The second method is similar to the first in that we’re passing a command string to bash, but we’re doing it in a pipeline via sudo.
$ sudo rm /root/example.gz
$ echo "cat /root/example.txt | gzip > /root/example.gz" | sudo bash
$ sudo ls /root/example.gz
/root/example.gz
sudo bash -c 'echo "[archlinuxfr]" >> /etc/pacman.conf'
STEP 1: Create a function in a bash file (write_pacman.sh):
#!/bin/bash
function write_pacman {
sudo tee -a /etc/pacman.conf > /dev/null << 'EOF'
[archlinuxfr]
Server = http://repo.archlinux.fr/$arch
EOF
}
Quoting the delimiter ('EOF') keeps the $arch variable from being expanded.
STEP 2: Source the bash file:
$ source write_pacman.sh
STEP 3: Execute the function:
$ write_pacman
Append one file to another (the sudo cat case):
cat <origin-file> | sudo tee -a <target-file>
Append echoed text to a file (the sudo echo case):
echo <origin> | sudo tee -a <target-file>
(EXTRA) Discard the output:
echo <origin> | sudo tee -a <target-file> >/dev/null

bash: run multiple commands each separately with a command prompt

I want to run multiple commands as if they were executed one at a time at the command prompt.
E.g. I have the following list of commands:
ls
pwd
du -sh
Now I try to copy-paste them and run:
$ ls
pwd
du -sh
file1.txt file2.txt
/home/user/test
1M .
but instead I want them executed separately, so that I can see their outputs like below:
$ ls
file1.txt file2.txt
$ pwd
/home/user/test
$ du -sh
1M .
So, given a list of commands, is it possible to paste them in such a way that they execute as if entered one per command prompt? Otherwise the only option is to paste one command at a time.
Generally I am given a list of commands to execute.
While pasting essentially works the way you describe, it may end up looking cosmetically wrong when the input (and its local echo) shows up while the shell is still busy executing the previous command.
You could instead feed the commands to bash -i, which will read and execute them in turn, showing the prompt:
$ mypaste() { x="$(cat)"; bash -i <<< "$x"; }
$ mypaste # Now paste some commands and hit ctrl-d
ls
pwd
whoami
^D
This results in:
you@yourdir $ ls
some files
you@yourdir $ pwd
/home/you/yourdir
you@yourdir $ whoami
you
you@yourdir $ exit
$
Open myscript.sh in nano or your favorite editor and paste the following:
#!/bin/bash
ls
pwd
du -sh
make it executable with chmod +x myscript.sh and run the script with
./myscript.sh
You can run any bash commands this way and see their outputs.
Try the commands separated by semicolons:
ls; pwd; du -sh;
This makes them a batch of commands. The shell will execute them one by one, and you don't have to paste each command separately.
Hope this helps.
The answer from that other guy worked, and I used a slightly modified version using a heredoc.
I wanted to script a sequence of commands that show the prompt so I could copy/paste on different systems and show how to replicate a bug.
simple version
bash -i << 'EOF'
echo "command one"
echo "command two"
EOF
more commands and pretty output
bash -i << 'EOF' && echo -e '\e[1A\e[K==========================================='
unset PROMPT_COMMAND; PS1='command-sequence:$ ' ; clear ; echo "==========================================="
mkdir /tmp/demo-commands
echo "file contents" > /tmp/demo-commands/file
cd /tmp/demo-commands
pwd
ls
cat file
rm file
rm -r /tmp/demo-commands
EOF
I customize the prompt and use echo -e '\e[1A\e[K' to replace the last line with a separator.

What is wrong with this simple history script?

I am missing something really simple I think:
$ cat hs.sh
#!/bin/bash
echo $1
history | grep -i $1
echo $#
exit
$
Here is the output:
$ ./history_search sed
sed
1
$
I am trying to create a script which I can use in the form './hs.sh sed' to search for all sed commands in my history. I can create an alias that does this, which works fine, but not the script.
Here is the alias:
alias hg='history | grep -i $1'
Interactive shells have history; scripted shells do not have history. You can only ask for history from an interactive shell, which is why the alias works but the script does not.
When you run this as a shell script, it spawns a new shell that has no history.
Try running it in the same shell like this:
source ./history_search sed
and it should work.
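If you want the script to keep working without sourcing, another option is to load the saved history file into the script's own (empty) history list. A sketch, assuming the default ~/.bash_history location; it only sees commands already written to that file, not ones from the current session:
#!/bin/bash
HISTFILE=~/.bash_history   # non-interactive shells do not set this by default
set -o history             # enable history in this non-interactive shell
history -r                 # read $HISTFILE into the history list
history | grep -i -- "$1"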

How to refer to redirection file from within a bash script?

I'd like to write a bash script myscript such that issuing this command:
myscript > filename.txt
would return the name of the file that its output is being redirected to, filename.txt. Is this possible?
If you are running on Linux, check where /proc/self/fd/1 links to.
For example, the script can do the following:
#!/bin/bash
readlink /proc/self/fd/1
And then run it:
$ ./myscript > filename.txt
$ cat filename.txt
/tmp/filename.txt
Note that if you want to save the value of the output file to a variable or something, you can't use /proc/self since it will be different in the subshell, but you can still use $$:
outputfile=$(readlink /proc/$$/fd/1)
Using lsof:
outfile=$(lsof -p $$ | awk '/1w/{print $NF}')
echo $outfile

How to invoke bash, run commands inside the new shell, and then give control back to user?

This must either be really simple or really complex, but I couldn't find anything about it... I am trying to open a new bash instance, then run a few commands inside it, and give the control back to the user inside that same instance.
I tried:
$ bash -lic "some_command"
but this executes some_command inside the new instance, then closes it. I want it to stay open.
One more detail which might affect answers: if I can get this to work I will use it in my .bashrc as alias(es), so bonus points for an alias implementation!
bash --rcfile <(echo '. ~/.bashrc; some_command')
dispenses with the creation of temporary files. Related questions on other sites:
https://serverfault.com/questions/368054/run-an-interactive-bash-subshell-with-initial-commands-without-returning-to-the
https://unix.stackexchange.com/questions/123103/how-to-keep-bash-running-after-command-execution
This is a late answer, but I had the exact same problem and Google sent me to this page, so for completeness here is how I got around the problem.
As far as I can tell, bash does not have an option to do what the original poster wanted to do. The -c option will always return after the commands have been executed.
Broken solution: The simplest and most obvious attempt to get around this is:
bash -c 'XXXX ; bash'
This partly works (albeit with an extra sub-shell layer). However, the problem is that while a sub-shell will inherit the exported environment variables, aliases and functions are not inherited. So this might work for some things but isn't a general solution.
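A quick interactive demo of that limitation (hypothetical alias name, typed at a prompt rather than in a script):
alias ll='ls -l'     # defined in the current interactive shell
bash -c 'type ll'    # the child shell reports that ll is not found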
Better: The way around this is to dynamically create a startup file and call bash with this new initialization file, making sure that your new init file calls your regular ~/.bashrc if necessary.
# Create a temporary file
TMPFILE=$(mktemp)
# Add stuff to the temporary file
echo "source ~/.bashrc" > $TMPFILE
echo "<other commands>" >> $TMPFILE
echo "rm -f $TMPFILE" >> $TMPFILE
# Start the new bash shell
bash --rcfile $TMPFILE
The nice thing is that the temporary init file will delete itself as soon as it is used, reducing the risk that it is not cleaned up correctly.
Note: I'm not sure if /etc/bashrc is usually called as part of a normal non-login shell. If so you might want to source /etc/bashrc as well as your ~/.bashrc.
You can pass --rcfile to Bash to cause it to read a file of your choice. This file will be read instead of your .bashrc. (If that's a problem, source ~/.bashrc from the other script.)
Edit: So a function to start a new shell with the stuff from ~/.more.sh would look something like:
more() { bash --rcfile ~/.more.sh ; }
... and in .more.sh you would have the commands you want to execute when the shell starts. (I suppose it would be elegant to avoid a separate startup file -- you cannot use standard input because then the shell will not be interactive, but you could create a startup file from a here document in a temporary location, then read it.)
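A sketch of that parenthetical idea, reusing the same hypothetical more helper: build the startup file from a here document in a temporary location, hand it to --rcfile, and remove it afterwards:
more() {
  local rc
  rc=$(mktemp) || return
  cat > "$rc" <<'EOF'
source ~/.bashrc
# commands to run when the new shell starts
echo "welcome to the customized shell"
EOF
  bash --rcfile "$rc"
  rm -f "$rc"
}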
bash -c '<some command> ; exec /bin/bash'
avoids the additional shell sublayer.
You can get the functionality you want by sourcing the script instead of running it. eg:
$cat script
cmd1
cmd2
$ . script
$ at this point cmd1 and cmd2 have been run inside this shell
Append to ~/.bashrc a section like this:
if [ "$subshell" = 'true' ]
then
# commands to execute only on a subshell
date
fi
alias sub='subshell=true bash'
Then you can start the subshell with sub.
The accepted answer is really helpful! Just to add that process substitution (i.e., <(COMMAND)) is not supported in some shells (e.g., dash).
In my case, I was trying to create a custom action (basically a one-line shell script) in Thunar file manager to start a shell and activate the selected Python virtual environment. My first attempt was:
urxvt -e bash --rcfile <(echo ". $HOME/.bashrc; . %f/bin/activate;")
where %f is the path to the virtual environment handled by Thunar.
I got an error (by running Thunar from command line):
/bin/sh: 1: Syntax error: "(" unexpected
Then I realized that my sh (essentially dash) does not support process substitution.
My solution was to invoke bash at the top level to interpret the process substitution, at the expense of an extra level of shell:
bash -c 'urxvt -e bash --rcfile <(echo "source $HOME/.bashrc; source %f/bin/activate;")'
Alternatively, I tried to use a here-document for dash, but with no success. Something like:
echo -e " <<EOF\n. $HOME/.bashrc; . %f/bin/activate;\nEOF\n" | xargs -0 urxvt -e bash --rcfile
In accordance with the answer by daveraja, here is a bash script which will serve the purpose.
Consider a situation where you are using C shell and you want to execute a command
without leaving the C-shell context/window, as follows.
Command to be executed: search for the exact word 'Testing' recursively in the current directory, only in *.h and *.c files:
grep -nrs --color -w --include="*.{h,c}" Testing ./
Solution 1: Enter bash from the C shell and execute the command:
bash
grep -nrs --color -w --include="*.{h,c}" Testing ./
exit
Solution 2: Write the intended command into a text file and execute it using bash
echo 'grep -nrs --color -w --include="*.{h,c}" Testing ./' > tmp_file.txt
bash tmp_file.txt
Solution 3: Run command on the same line using bash
bash -c 'grep -nrs --color -w --include="*.{h,c}" Testing ./'
Solution 4: Create a script (one-time) and use it for all future commands
alias ebash './execute_command_on_bash.sh'
ebash grep -nrs --color -w --include="*.{h,c}" Testing ./
The script is as follows,
#!/bin/bash
# =========================================================================
# References:
# https://stackoverflow.com/a/13343457/5409274
# https://stackoverflow.com/a/26733366/5409274
# https://stackoverflow.com/a/2853811/5409274
# https://www.linuxquestions.org/questions/other-%2Anix-55/how-can-i-run-a-command-on-another-shell-without-changing-the-current-shell-794580/
# https://www.tldp.org/LDP/abs/html/internalvariables.html
# https://stackoverflow.com/a/4277753/5409274
# =========================================================================
# Enable the following line to see the script commands
# getting printed along with their execution. This will help with debugging.
#set -o verbose
E_BADARGS=85
if [ ! -n "$1" ]
then
echo "Usage: `basename $0` grep -nrs --color -w --include=\"*.{h,c}\" Testing ."
echo "Usage: `basename $0` find . -name \"*.txt\""
exit $E_BADARGS
fi
# Create a temporary file
TMPFILE=$(mktemp)
# Add stuff to the temporary file
#echo "echo Hello World...." >> $TMPFILE
#initialize the variable that will contain the whole argument string
argList=""
#iterate on each argument
for arg in "$@"
do
    # if an argument contains a white space, enclose it in double quotes and append it to the list,
    # otherwise simply append the argument to the list
    if echo "$arg" | grep -q " "; then
        argList="$argList \"$arg\""
    else
        argList="$argList $arg"
    fi
done
# remove a possible leading space from the list
argList=$(echo $argList | sed 's/^ *//')
# Echoing the command to be executed to tmp file
echo "$argList" >> $TMPFILE
# Note: This should be your last command
# Important last command which deletes the tmp file
last_command="rm -f $TMPFILE"
echo "$last_command" >> $TMPFILE
#echo "---------------------------------------------"
#echo "TMPFILE is $TMPFILE as follows"
#cat $TMPFILE
#echo "---------------------------------------------"
check_for_last_line=$(tail -n 1 $TMPFILE | grep -o "$last_command")
#echo $check_for_last_line
#if tail -n 1 $TMPFILE | grep -o "$last_command"
if [ "$check_for_last_line" == "$last_command" ]
then
#echo "Okay..."
bash $TMPFILE
exit 0
else
echo "Something is wrong"
echo "Last command in your tmp file should be removing itself"
echo "Aborting the process"
exit 1
fi
$ bash --init-file <(echo 'some_command')
$ bash --rcfile <(echo 'some_command')
In case you can't or don't want to use process substitution:
$ cat script
some_command
$ bash --init-file script
Another way:
$ bash -c 'some_command; exec bash'
$ sh -c 'some_command; exec sh'
sh-only way (dash, busybox):
$ ENV=script sh
Here is yet another (working) variant:
This opens a new gnome terminal, then in the new terminal it runs bash. The user's rc file is read first, then a command ls -la is sent for execution to the new shell before it turns interactive.
The last echo adds an extra newline that is needed to finish execution.
gnome-terminal -- bash -c 'bash --rcfile <( cat ~/.bashrc; echo ls -la ; echo)'
I also find it useful sometimes to decorate the terminal, e.g. with color for better orientation.
gnome-terminal --profile green -- bash -c 'bash --rcfile <( cat ~/.bashrc; echo ls -la ; echo)'
