How do you run a command, e.g. chmod, for each line of a file? - shell

For example, right now I'm using the following to change a couple of files whose Unix paths I wrote to a file:
cat file.txt | while read in; do chmod 755 "$in"; done
Is there a more elegant, safer way?

Read a file line by line and execute commands: 4+ answers
Because the main purpose of a shell (and/or bash) is to run other commands, there is more than one answer!
    0. Shell command line expansion
    1. xargs dedicated tool
    2. while read with some remarks
    3. while read -u using dedicated fd, for interactive processing (sample)
    4. running the shell with an inline generated script
Regarding the OP's request (running chmod on all targets listed in a file), xargs is the indicated tool. But for other applications, small numbers of files, etc., other approaches may fit better.
0. Read entire file as command line argument.
If your file is not too big and all files are well named (without spaces or other special chars like quotes), you could use shell command line expansion. Simply:
chmod 755 $(<file.txt)
For a small number of files (lines), this command is the lightest one.
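To see why well-behaved names matter here, note that word splitting breaks this expansion on any whitespace. A minimal demonstration, using a hypothetical list.txt:
printf 'my file.txt\n' > list.txt
chmod 755 $(<list.txt)   # expands to: chmod 755 my file.txt -- two wrong arguments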
1. xargs is the right tool
For a bigger number of files, or almost any number of lines in your input file...
For many coreutils commands, like chown, chmod, rm, cp -t ...:
xargs chmod 755 <file.txt
It can also be used after a pipe, on files found by find:
find /some/path -type f -uid 1234 -print | xargs chmod 755
If you have special chars and/or a lot of lines in file.txt:
xargs -0 chmod 755 < <(tr \\n \\0 <file.txt)
find /some/path -type f -uid 1234 -print0 | xargs -0 chmod 755
If your command needs to be run exactly once for each entry:
xargs -0 -n 1 chmod 755 < <(tr \\n \\0 <file.txt)
This is not needed for this sample, as chmod accepts multiple files as arguments, but it matches the title of the question.
For some special cases, you could even define the location of the file argument in commands generated by xargs:
xargs -0 -I '{}' -n 1 myWrapper -arg1 -file='{}' wrapCmd < <(tr \\n \\0 <file.txt)
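As a more concrete sketch of that placement feature (the target directory here is hypothetical), copying each listed file into a backup directory with cp:
xargs -0 -I '{}' cp -v '{}' /some/backup/ < <(tr \\n \\0 <file.txt)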
Test with seq 1 5 as input
Try this:
xargs -n 1 -I{} echo Blah {} blabla {}.. < <(seq 1 5)
Blah 1 blabla 1..
Blah 2 blabla 2..
Blah 3 blabla 3..
Blah 4 blabla 4..
Blah 5 blabla 5..
where your command is executed once per line.
2. while read and variants.
As the OP suggests,
cat file.txt |
while read in; do
chmod 755 "$in"
done
will work, but there are 2 issues:
cat | is a useless fork, and
| while ... ;done will become a subshell whose environment will disappear after ;done.
So this could be better written:
while read in; do
chmod 755 "$in"
done < file.txt
But you should be aware of $IFS and read's flags:
help read
read: read [-r] ... [-d delim] ... [name ...]
...
Reads a single line from the standard input... The line is split
into fields as with word splitting, and the first word is assigned
to the first NAME, the second word to the second NAME, and so on...
Only the characters found in $IFS are recognized as word delimiters.
...
Options:
...
-d delim continue until the first character of DELIM is read,
rather than newline
...
-r do not allow backslashes to escape any characters
...
Exit Status:
The return code is zero, unless end-of-file is encountered...
In some cases, you may need to use
while IFS= read -r in;do
chmod 755 "$in"
done <file.txt
to avoid problems with strange filenames. And if you encounter problems with UTF-8:
while LANG=C IFS= read -r in ; do
chmod 755 "$in"
done <file.txt
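To see what IFS= and -r actually protect against, here is a small demo (demo.txt is a hypothetical test file):
printf ' leading space\\backslash\n' > demo.txt
while read in; do printf '<%s>\n' "$in"; done <demo.txt           # prints <leading spacebackslash>: space and backslash lost
while IFS= read -r in; do printf '<%s>\n' "$in"; done <demo.txt   # prints < leading space\backslash>: line intact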
While you use a redirection from standard input for reading file.txt, your script cannot read other input interactively (you cannot use standard input for anything else anymore).
3. while read, using dedicated fd.
The syntax while read ...;done <file.txt redirects standard input to come from file.txt. That means you won't be able to read anything else from standard input until the loop finishes.
Using a dedicated fd will let you use more than one input simultaneously: you could merge two files (like here: scriptReplay.sh), or maybe
you plan to create an interactive tool; then you have to avoid using standard input and use some alternative file descriptor.
Constant file descriptors are:
0 for standard input
1 for standard output
2 for standard error.
3.1 POSIX shell first
You can see them with:
ls -l /dev/fd/
or
ls -l /proc/$$/fd/
From there, you have to choose an unused number between 3 and 63 (more, in fact, depending on sysctl settings) as your file descriptor.
For this demo, I will use file descriptor 7:
while read <&7 filename; do
ans=
while [ -z "$ans" ]; do
read -p "Process file '$filename' (y/n)? " foo
[ "$foo" ] && [ -z "${foo#[yn]}" ] && ans=$foo || echo '??'
done
if [ "$ans" = "y" ]; then
echo Yes
echo "Processing '$filename'."
else
echo No
fi
done 7<file.txt
If you want to read your input file in several distinct steps, you have to use:
exec 7<file.txt # Without spaces between `7` and `<`!
# ls -l /dev/fd/
read <&7 headLine
while read <&7 filename; do
case "$filename" in
*'----' ) break ;; # break the loop when a line ends with four dashes.
esac
....
done
read <&7 lastLine
exec 7<&- # This will close file descriptor 7.
# ls -l /dev/fd/
3.2 Same under bash
Under bash, you can let it choose any free fd for you and store it in a variable, with exec {varname}</path/to/input:
while read -ru ${fle} filename;do
ans=
while [ -z "$ans" ]; do
read -rp "Process file '$filename' (y/n)? " -sn 1 foo
[ "$foo" ] && [ -z "${foo/[yn]}" ] && ans=$foo || echo '??'
done
if [ "$ans" = "y" ]; then
echo Yes
echo "Processing '$filename'."
else
echo No
fi
done {fle}<file.txt
Or
exec {fle}<file.txt
# ls -l /dev/fd/
read -ru ${fle} headline
while read -ru ${fle} filename;do
[[ -n "$filename" ]] && [[ -z ${filename//*----} ]] && break
....
done
read -ru ${fle} lastLine
exec {fle}<&-
# ls -l /dev/fd/
4. Filtering the input file to create shell commands
sed <file.txt 's/.*/chmod 755 "&"/' | sh
This won't reduce the number of forks, but it can be useful for more complex (or conditional) operations:
sed <file.txt 's/.*/if [ -e "&" ];then chmod 755 "&";fi/' | sh
sed 's/.*/[ -f "&" ] \&\& echo "Processing: \\"&\\"" \&\& chmod 755 "&"/' \
file.txt | sh
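Because piping generated text into sh executes it blindly, it may be worth previewing the generated script first; a minimal sketch:
sed 's/.*/chmod 755 "&"/' file.txt        # preview: print the commands without running them
sed 's/.*/chmod 755 "&"/' file.txt | sh   # run them once the output looks right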

Yes.
while read in; do chmod 755 "$in"; done < file.txt
This way you can avoid a cat process.
cat is almost always bad for a purpose such as this. You can read more about Useless Use of Cat.

If you have a nice selector (for example, all .txt files in a dir),
you could do:
for i in *.txt; do chmod 755 "$i"; done
bash for loop
or a variant of yours:
while read line; do chmod 755 "$line"; done < file.txt

If you want to run your command in parallel for each line, you can use GNU Parallel:
parallel -a <your file> <program>
Each line of your file will be passed to program as an argument. By default, parallel runs one job per CPU core, but you can change that with -j.
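For example (assuming GNU Parallel is installed), limiting the run to 4 simultaneous jobs could look like:
parallel -j 4 -a file.txt chmod 755
Each line of file.txt is appended as the last argument of chmod 755, one invocation per line.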

If you know you don't have any whitespace in the input:
xargs chmod 755 < file.txt
If there might be whitespace in the paths, and if you have GNU xargs:
tr '\n' '\0' < file.txt | xargs -0 chmod 755

Nowadays (on GNU/Linux) xargs is still the answer for this, but... you can now use the -a option to read input directly from a file:
xargs -a file.txt -n 1 -I {} chmod 755 {}
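Since chmod accepts many files per invocation, the -n 1 -I {} parts are optional here; a shorter equivalent with GNU xargs would be:
xargs -a file.txt chmod 755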

You can also use AWK, which can give you more flexibility to handle the file:
awk '{ print "chmod 755 "$0"" | "/bin/sh"}' file.txt
If your file has a field separator, like:
field1,field2,field3
then to use only the first field, you do:
awk -F, '{ print "chmod 755 "$1"" | "/bin/sh"}' file.txt
You can find more details in the GNU documentation:
https://www.gnu.org/software/gawk/manual/html_node/Very-Simple.html#Very-Simple
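If the filenames may contain spaces, a slightly safer variant (still only a sketch; it breaks on names containing double quotes) is to have awk emit quoted arguments:
awk '{ printf "chmod 755 \"%s\"\n", $0 }' file.txt | sh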

I see that you tagged bash, but Perl would also be a good way to do this:
perl -p -e '`chmod 755 $_`' file.txt
You could also apply a regex to make sure you're getting the right files, e.g. to only process .txt files:
perl -p -e '`chmod 755 $_` if /\.txt$/' file.txt
To "preview" what's happening, just replace the backticks with double quotes and prepend print:
perl -p -e 'print "chmod 755 $_" if /\.txt$/' file.txt

The same logic applies to many other objectives.
For example: how would you read the .sh_history of each user under the /home/ filesystem? What if there are thousands of them?
#!/bin/ksh
# user names from the ten most recent logins
last | head -10 | awk '{print $1}' |
while IFS= read -r line
do
    # su - starts in that user's home directory, so the relative path works
    su - "$line" -c 'tail .sh_history'
done
Here is the script https://github.com/imvieira/SysAdmin_DevOps_Scripts/blob/master/get_and_run.sh

Related

why 'ls' command printing the directory content multiple times

I have the following shell script, in which I want to check specific directory contents on remote machines and print them to a file.
file=serverList.csv
n=0
while [ $n -le 2 ]
do
while IFS=: read -r f1 f2
do
# echo line is stored in $line
if echo $f1 | grep -q "xx.xx.xxx";
then
ssh user@$f1 ls path/*war_* > path/$f1.txt < /dev/null; ls path/*zip_* >> path/$f1.txt < /dev/null;
ssh user@$f1 ls -d /apps/jetty*_* >> path/$f1.txt < /dev/null;
fi
done < "$file"
sleep 15
n=$(( n+1 ))
done
I am using this script in a cron job that runs every 2 minutes, as follows:
*/2 * * * * /path/myscript.sh
but somehow I am ending up with the following output file:
/apps/jetty/webapps_wars/test_new.war
path/ReleaseTest.static.zip_2020-08-05
path/ReleaseTest.static.zip_2020-08-05
path/ReleaseTest.static.zip_2020-08-05
path/jetty_xx.xx_2020-08-05
path/jetty_new
path/jetty_xx.xx_2020-08-05
path/jetty_new
I am not sure why I am getting the files in the list twice, sometimes 3 times; but when I execute the script directly from PuTTY, it works fine. What do I need to change in order to correct this script?
Example:
~$ cd tmp
~/tmp$ mkdir test
~/tmp$ cd !$
cd test
~/tmp/test$ mkdir -p apps/jetty/webapp_wars/ && touch apps/jetty/webapp_wars/test_new.war
~/tmp/test$ mkdir path
~/tmp/test$ touch path/{ReleaseTest.static.zip_2020-08-05,jetty_xx.xx_2020-08-05,jetty_new}
~/tmp/test$ cd ..
~/tmp$ listpath=$(find test/path \( -name "*2020-08-05" -o -name "*new" \) )
~/tmp$ listapps=$(find test/apps/ -name "*war" )
~/tmp$ echo ${listpath[@]}" "${listapps[@]} | tr " " "\n" | sort > resultfile
~/tmp$
~/tmp$ cat resultfile
test/apps/jetty/webapp_wars/test_new.war
test/path/jetty_new
test/path/jetty_xx.xx_2020-08-05
test/path/ReleaseTest.static.zip_2020-08-05
~/tmp$ rm -rf test/ && unset listapps && unset listpath && rm resultfile
~/tmp$
This way you get only one result for each pattern you are looking for in your if...then...else block of code.
Just adapt the ssh ..... find commands and take care of quotes and parentheses; this is the easiest solution, as you do not have to rewrite the script from scratch. And be careful about local vs. remote variables if you use them.
You really should not use ls, but the fundamental problem is probably that three separate commands with three separate wildcards can match the same file multiple times.
Also, one of your commands is executed locally (you forgot to put ssh etc. in front of the second one), so if the wildcard matches on your local computer, it produces a result which doesn't reflect the situation on the remote server.
Try this refactoring.
file=serverList.csv
n=0
while [ $n -le 2 ]
do
while IFS=: read -r f1 f2
do
# echo line is stored in $line <- XXX this is not true
if echo "$f1" | grep -q "xx.xx.xxx";
then
ssh user@$f1 "printf '%s\n' path/*war_* path/*zip_* /apps/jetty*_*" | sort -u >path/"$f1".txt < /dev/null
fi
done < "$file"
sleep 15
n=$(( n+1 ))
done
The sort gets rid of any duplicates. This assumes none of your file names contain newlines; if they do, you'd need to use something which robustly handles them (try printf '%s\0' and sort -z but these are not portable).
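A sketch of that NUL-based variant (assuming GNU sort on the local side), keeping the rest of the loop the same:
ssh user@$f1 "printf '%s\0' path/*war_* path/*zip_* /apps/jetty*_*" | sort -zu | tr '\0' '\n' > path/"$f1".txt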
ls would definitely also accept three different wildcards but like the link above explains, you really never want to use ls in scripts.


bash call script with variable

What I want to achieve is the following :
I want the subtitles for my TV Show downloaded automatically.
The script "getSubtitle.sh" is run as soon as the show is downloaded, but it can happen that no subtitles have been released yet.
So what I am doing to counter this :
Creating a file each time "getSubtitle.sh" is run. It contains the location of the script with its arguments, for example:
/Users/theo/logSubtitle/getSubtitle.sh "The Walking Dead - 5x10 - Them.mp4" "The.Walking.Dead.S05E10.480p.HDTV.H264.mp4" "/Volumes/Window HD/Série/The Walking Dead"
If a subtitle has been found, this file will contain only this line; if no subtitle has been found, the file will have 2 lines (the first one being "no subtitle downloaded", and the second one being the path to the script, as explained above).
Now, once I have this, I'm planning to run a cron job every day that will do the following:
Remove all files that have only 1 line (subtitle found), and execute the script again for the remaining files. Here is the full script:
cd ~/logSubtitle/waiting/
for f in *
do nbligne=$(wc -l $f | cut -c 8)
if [ "$nbligne" = "1" ]
then
rm $f
else
command=$(sed -n "2 p" $f)
sh $command 3>&1 1>&2 2>&3 | grep down > $f ; echo $command >> $f
fi
done
This is unfortunately not working; I have the feeling that the script is not called.
When I replace $command with the line from the text file, it works.
I am sure that $command matches the line, because of the "echo $command >> $f" at the end of my script.
So I really don't get what I am missing here. Any ideas?
Thanks.
I'm not sure what you're trying to achieve with the cut -c 8 part in wc -l $f | cut -c 8. cut -c 8 will select the 8th character of the output of wc -l.
A suggestion: to check whether your file contains one or two lines (and since you'll need the content of the second line, if any, anyway), use mapfile. This will slurp the file into an array, one line per element. You can use the option -n 2 to read at most 2 lines. This will be much more efficient, safe and nice than your solution:
mapfile -t -n 2 ary < file
Then:
if ((${#ary[@]}==1)); then
printf 'File contains one line only: %s\n' "${ary[0]}"
elif ((${#ary[@]}==2)); then
printf 'File contains (at least) two lines:\n'
printf ' %s\n' "${ary[@]}"
else
printf >&2 'Error, no lines found in file\n'
fi
Another suggestion: use more quotes!
With this, a better way to write your script:
#!/bin/bash
dir=$HOME/logSubtitle/waiting/
shopt -s nullglob
for f in "$dir"/*; do
mapfile -t -n 2 ary < "$f"
if ((${#ary[@]}==1)); then
rm -- "$f" || printf >&2 "Error, can't remove file %s\n" "$f"
elif ((${#ary[@]}==2)); then
{ sh -c "${ary[1]}" 3>&1 1>&2 2>&3 | grep down; echo "${ary[1]}"; } > "$f"
else
printf >&2 'Error, file %s contains no lines\n' "$f"
fi
done
After the done keyword you can even add the redirection 2>> logfile to a log file if you wish. Make sure the cron job is run with your user: check crontab -l and, if needed, edit it with crontab -e.
Use eval instead of sh. The reason it works with eval and not sh comes down to the number of passes used to evaluate variables: sh treats the string extracted by sed as the name of a script to execute, while eval evaluates the string first and then executes the result.
Briefly explained.
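A minimal demonstration of the difference, with a hypothetical command string:
command='echo hello world'
sh $command     # error: sh looks for a script file named "echo"
eval $command   # the current shell re-evaluates the string: prints "hello world"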

Custom unix command combination assigning to variable

I want to make a UNIX script which will automatically move my working directory files to newly created directories.
Example: In your dir you have the files:
001-file.html,
001-file.rb,
002-file.html,
002-file.rb
Two of the files should be moved to ./NewDir/001-file and the other two to ./NewDir/002-file.
My problem is that after I get the correct string from the Unix commands, I cannot assign it to a variable.
Here is my code:
clear
echo "Starting script"
echo "Dir = "$(pwd)
read -p "Please enter count(max '999') of different file groups:" max_i
read -p "Enter new dir name:" outer_dir_name
for ((i=0; i<=$max_i;i++)) do
a1=$(($i/100))
a2=$((($i-$a1*100)/10))
a3=$(($i-($a2*10)-($a1*100)))
inner_dir_name=$((ls *[$a1][$a2][$a3]* 2>/dev/null | head -n 1 | cut -f1 -d"."))
echo $inner_dir_name
echo "--------------"
done
One pair of round parentheses is enough for command substitution.
inner_dir_name=$(ls *[$a1][$a2][$a3]* 2>/dev/null | head -n 1 | cut -f1 -d".")
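As an aside, the doubled form $(( ... )) is arithmetic expansion, which is why the original line failed; compare:
echo $(date +%Y)   # command substitution: runs the command, prints the year
echo $((40 + 2))   # arithmetic expansion: prints 42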
It looks like you're going about the operation the hard way. I would probably do something like this, assuming that there are no spaces in the file names:
ls | sed 's/\..*$//' | sort -u |
while read prefix
do
mkdir -p $outer_dir_name/$prefix
mv $prefix.* $outer_dir_name/$prefix
done
The ls could be made more precise with:
ls [0-9][0-9][0-9]-file.*
If I was worried about blanks and other odd-ball characters in the file names, I'd have to use something more careful:
for file in [0-9][0-9][0-9]-file.*
do
prefix=${file%%.*}
[ -d "$outer_dir_name/$prefix" ] || mkdir -p "$outer_dir_name/$prefix"
mv "$file" "$outer_dir_name/$prefix"
done
This executes more mv commands, in general.

