Using DOS file contents as command line arguments in BASH

This is a follow-up to this question's answer.
How can I modify the code so that the annoying CRLF of a DOS-created file can be stripped away before being passed to xargs?
Example files 'arglist.unix' and 'arglist.dos':
# cat > arglist.unix
src/file1 dst/file1
src/file2 dst/file2
src/file3 dst/file3
^c
# sed 's/$/\r/' arglist.unix > arglist.dos
The unix variant of the file works with this:
$ xargs -n2 < arglist.unix echo cp
cp src/file1 dst/file1
cp src/file2 dst/file2
cp src/file3 dst/file3
For my own education, how can I change it to accept either the 'arglist.unix' or 'arglist.dos' files on the same command line?

cat arglist.dos | tr -d "\r" | xargs -n2 echo cp
gives you the same result as
cat arglist.unix | tr -d "\r" | xargs -n2 echo cp
so it works on both files.
tr -d "\r" removes all the CR characters

Use dos2unix (sometimes installed as d2u) to remove the CRs before passing the file to xargs.
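For example (a minimal sketch, assuming dos2unix is installed; when given no file arguments it acts as a filter on stdin/stdout):
$ dos2unix < arglist.dos | xargs -n2 echo cp
cp src/file1 dst/file1
cp src/file2 dst/file2
cp src/file3 dst/file3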

Pipe output from command to another command

A bash function, prepend_line, takes two parameters: a string and a fully-qualified path to a file. It's used for logging, inserting the current date/time and the string at the top of the log file.
Standalone use works fine: prepend_line "test string" "$log_file"
How can I get the output from a command, e.g. mv -fv "$fileOne" "$fileTwo" to be used as the first parameter for prepend_line?
I've tried various combinations of piping to xargs, but I don't understand how it works and I'm not convinced it's the best way in any case.
If you really have to:
export -f prepend_line
export log_file   # the subshell spawned by xargs also needs log_file in its environment
mv -fv "$fileOne" "$fileTwo" |
xargs -0 bash -c 'prepend_line "$1" "$log_file"' --
The -0 makes xargs parse the input as being zero-delimited. As there should be no NUL bytes in mv -v output (filenames can't contain a zero byte), you will get only a single element. This element/line will be passed as the first argument to the bash subshell.
Tested with:
prepend_line() {
    printf "%s\n" "$@" | xxd -p
}
fileOne=$'1\x01\x02\x031234566\n\t\e'
fileTwo=$'2\x01\x02\x031234566\n\t\e \n\n\n'
export -f prepend_line
printf "%s\n" "$fileOne -> $fileTwo" |
xargs -0 bash -c 'prepend_line "$1" "$log_file"' --
The script will output (output from the xxd -p inside prepend_line):
31010203313233343536360a091b202d3e2032010203313233343536360a
091b200a0a0a0a0a0a
Same hex output with some extra newlines and comments:
# first filename $'1\x01\x02\x031234566\n\t\e'
31010203313233343536360a091b
# the string: space + '->' + space
202d3e20
# second filename $'2\x01\x02\x031234566\n\t\e \n\n\n'
32010203313233343536360a091b200a0a0a0a0a0a
If you really have to parse some strange inputs, you can convert your string to hex with xxd -p. Then, later, convert it back to the machine representation with xxd -r -p, streaming it right into the output:
prepend_line() {
    # some work
    # decode the hex string in $1 and append it to the log file in $2
    <<<"$1" xxd -p -r >> "$2"
    # some other work
}
prepend_line "$(mv -fv "$fileOne" "$fileTwo" | xxd -p)" "$log_file"
But I doubt you will ever need to handle such cases. Who puts $'\x01' in a filename, or suffixes one with empty newlines, like 'great_script.sh'$'\n\n'?
Anyway, objectively I would rather see the interface as using a stream:
mv -fv "$fileOne" "$fileTwo" | prepend_line "$log_file"
It needs set -o pipefail to propagate errors correctly. Inside prepend_line I would just redirect the output to the log file or some temporary file, avoiding the need for parsing and its corner cases.
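A minimal sketch of that stream-based interface (a hypothetical implementation; the temp-file dance and timestamp format are assumptions, not the original function):
prepend_line() {
    # Hypothetical sketch: prepend a timestamp plus everything on stdin to the log file named in $1.
    local log_file=$1 tmp
    tmp=$(mktemp) || return
    printf '%s ' "$(date '+%Y-%m-%d %H:%M:%S')" > "$tmp"
    cat >> "$tmp"                                  # the piped command output
    [ -f "$log_file" ] && cat "$log_file" >> "$tmp"
    mv "$tmp" "$log_file"
}
set -o pipefail
mv -fv "$fileOne" "$fileTwo" | prepend_line "$log_file"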

how to compress the output of cat command

The title is not good at all, but here is an example of the output of cat:
/var/oracle/oradata/DB11G/system01.dbf
/var/oracle/oradata/DB11G/sysaux01.dbf
/var/oracle/oradata/DB11G/undotbs01.dbf
/var/oracle/oradata/DB11G/users01.dbf
/var/oracle/oradata/DB11G/example01.dbf
/var/oracle/oradata/jabba/jabba01.dbf
/var/oracle/oradata/DB11G/control01.ctl
/var/oracle/flash_recovery_area/DB11G/control02.ctl
/var/oracle/oradata/DB11G/redo03.log
/var/oracle/oradata/DB11G/redo02.log
/var/oracle/oradata/DB11G/redo01.log
The cat command gives the paths to these files.
I need to compress these files into a tar.gz.
How can I do it?
As a simple workaround, if your version of tar does not support the --files-from option, you can use tr to produce a command line of files.
Given:
$ cat files.txt
/var/oracle/oradata/DB11G/system01.dbf
/var/oracle/oradata/DB11G/sysaux01.dbf
/var/oracle/oradata/DB11G/undotbs01.dbf
/var/oracle/oradata/DB11G/users01.dbf
/var/oracle/oradata/DB11G/example01.dbf
/var/oracle/oradata/jabba/jabba01.dbf
/var/oracle/oradata/DB11G/control01.ctl
/var/oracle/flash_recovery_area/DB11G/control02.ctl
/var/oracle/oradata/DB11G/redo03.log
/var/oracle/oradata/DB11G/redo02.log
/var/oracle/oradata/DB11G/redo01.log
You can do:
$ cat files.txt | tr '\n' ' '
/var/oracle/oradata/DB11G/system01.dbf /var/oracle/oradata/DB11G/sysaux01.dbf /var/oracle/oradata/DB11G/undotbs01.dbf /var/oracle/oradata/DB11G/users01.dbf /var/oracle/oradata/DB11G/example01.dbf /var/oracle/oradata/jabba/jabba01.dbf /var/oracle/oradata/DB11G/control01.ctl /var/oracle/flash_recovery_area/DB11G/control02.ctl /var/oracle/oradata/DB11G/redo03.log /var/oracle/oradata/DB11G/redo02.log /var/oracle/oradata/DB11G/redo01.log
Then use that on the command line of tar.
The shell's word splitting also treats \n as whitespace during command substitution (so tar never actually sees the newlines), which means you can do this directly:
$ tar -c $(cat files.txt)
Or try:
$ tar -c $(cat files.txt | tr '\n' ' ')
Of course if your tar supports --files-from, use that.
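For example, with GNU tar (the archive name here is just an illustration):
$ tar -czf oradata.tar.gz --files-from=files.txt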

Creating a file using parameters of other file using UNIX Shell Script

I have a series of files
484_mexico_201401.dat
484_mexico_201402.dat
484_mexico_201403.dat
… so on
I want to make files
484_mexico_201401.mft, which will have the content below:
484 | dat file name | line count for the .dat file
Example:
484|484_mexico_201401.dat|6000
Can anyone help with a shell script for this?
You can try this in bash:
for file in 484_*.dat        # match only the .dat files, not the generated .mft files
do
    new_file=${file%.*}
    echo "$(sed 's/^\([^_]\+\)_.*/\1/' <<<"$file")|$file|$(wc -l "$file" | cut -d' ' -f1)" > "$new_file.mft"
done
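A variant using only parameter expansion instead of sed (a sketch under the same filename assumptions; wc -l < "$file" prints just the count, so no cut is needed, though some non-GNU wc implementations pad it with spaces):
for file in 484_*.dat
do
    printf '%s|%s|%s\n' "${file%%_*}" "$file" "$(wc -l < "$file")" > "${file%.*}.mft"
done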
You can also try this:
location="./";for file in $(ls $location);do echo "$(echo $file|cut -d '_' -f 1)|$file|$(wc -l $file | cut -d ' ' -f 1)" >> output.txt;done
Then you will be able to read the new file by typing cat output.txt.
If you want a full script for it, then you may need to add #!/bin/bash as the first line of the script.
#!/bin/bash
location="./";for file in $(ls $location);do echo "$(echo $file|cut -d '_' -f 1)|$file|$(wc -l $file | cut -d ' ' -f 1)" >> output.txt;done
Save that into a file where you want the script to be, then run chmod 555 scriptname.sh and you should be able to run it.

How do you run a command, e.g. chmod, for each line of a file?

For example, right now I'm using the following to change a couple of files whose Unix paths I wrote to a file:
cat file.txt | while read in; do chmod 755 "$in"; done
Is there a more elegant, safer way?
Read a file line by line and execute commands: 4+ answers
Because the main usage of the shell (and/or bash) is to run other commands, there is more than one answer!
    0. Shell command line expansion
    1. xargs dedicated tool
    2. while read with some remarks
    3. while read -u using dedicated fd, for interactive processing (sample)
    5. running shell with inline generated script
Regarding the OP's request (running chmod on all targets listed in a file), xargs is the indicated tool. But for some other applications, a small number of files, etc., the other approaches below can help.
0. Read entire file as command line argument.
If your file is not too big and all files are well named (without spaces or other special characters like quotes), you could use shell command-line expansion. Simply:
chmod 755 $(<file.txt)
For a small number of files (lines), this command is the lightest one.
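To see why well-behaved names matter, here is a sketch of the failure mode (the filename is hypothetical):
printf '%s\n' 'my file.txt' > file.txt   # one entry containing a space
chmod 755 $(<file.txt)                   # word splitting yields two arguments: 'my' and 'file.txt'
# chmod: cannot access 'my': No such file or directory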
1. xargs is the right tool
For a bigger number of files, or almost any number of lines in your input file...
For many coreutils tools, like chown, chmod, rm, cp -t, ...
xargs chmod 755 <file.txt
It could also be used after a pipe, on files found by find:
find /some/path -type f -uid 1234 -print | xargs chmod 755
If you have special characters and/or a lot of lines in file.txt:
xargs -0 chmod 755 < <(tr \\n \\0 <file.txt)
find /some/path -type f -uid 1234 -print0 | xargs -0 chmod 755
If your command needs to be run exactly once for each entry:
xargs -0 -n 1 chmod 755 < <(tr \\n \\0 <file.txt)
This is not needed for this sample, as chmod accepts multiple files as arguments, but it matches the title of the question.
For some special cases, you could even define the location of the file argument in the commands generated by xargs:
xargs -0 -I '{}' -n 1 myWrapper -arg1 -file='{}' wrapCmd < <(tr \\n \\0 <file.txt)
Test with seq 1 5 as input
Try this:
xargs -n 1 -I{} echo Blah {} blabla {}.. < <(seq 1 5)
Blah 1 blabla 1..
Blah 2 blabla 2..
Blah 3 blabla 3..
Blah 4 blabla 4..
Blah 5 blabla 5..
where your command is executed once per line.
2. while read and variants.
As OP suggests,
cat file.txt |
while read in; do
    chmod 755 "$in"
done
will work, but there are 2 issues:
cat | is a useless fork, and
| while ... ; done creates a subshell whose environment will disappear after done.
So this could be better written:
while read in; do
    chmod 755 "$in"
done < file.txt
But you should be aware of $IFS and read's flags:
help read
read: read [-r] ... [-d delim] ... [name ...]
...
Reads a single line from the standard input... The line is split
into fields as with word splitting, and the first word is assigned
to the first NAME, the second word to the second NAME, and so on...
Only the characters found in $IFS are recognized as word delimiters.
...
Options:
...
-d delim continue until the first character of DELIM is read,
rather than newline
...
-r do not allow backslashes to escape any characters
...
Exit Status:
The return code is zero, unless end-of-file is encountered...
In some cases, you may need to use
while IFS= read -r in; do
    chmod 755 "$in"
done <file.txt
to avoid problems with strange filenames. And maybe, if you encounter problems with UTF-8:
while LANG=C IFS= read -r in; do
    chmod 755 "$in"
done <file.txt
While you use a redirection from standard input for reading file.txt, your script cannot read other input interactively (you cannot use standard input for other input anymore).
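For really strange filenames (even ones containing newlines), a NUL-delimited read is a possible sketch, assuming bash and GNU find:
while IFS= read -r -d '' in; do
    chmod 755 "$in"
done < <(find /some/path -type f -print0)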
3. while read, using dedicated fd.
Syntax: while read ...; done <file.txt redirects standard input to come from file.txt. That means you won't be able to read anything else from standard input until the loop finishes.
A dedicated file descriptor instead lets you use more than one input simultaneously: you could merge two files (like here: scriptReplay.sh), or build an interactive tool, where you have to avoid using standard input for the file and use some alternative file descriptor.
Constant file descriptors are:
0 for standard input
1 for standard output
2 for standard error.
3.1 POSIX shell first
You could see them by:
ls -l /dev/fd/
or
ls -l /proc/$$/fd/
From there, you have to choose an unused number between 0 and 63 (more, in fact, depending on sysctl settings) as your file descriptor.
For this demo, I will use file descriptor 7:
while read <&7 filename; do
    ans=
    while [ -z "$ans" ]; do
        read -p "Process file '$filename' (y/n)? " foo
        [ "$foo" ] && [ -z "${foo#[yn]}" ] && ans=$foo || echo '??'
    done
    if [ "$ans" = "y" ]; then
        echo Yes
        echo "Processing '$filename'."
    else
        echo No
    fi
done 7<file.txt
If you want to read your input file in several distinct steps, you have to use:
exec 7<file.txt # Without spaces between `7` and `<`!
# ls -l /dev/fd/
read <&7 headLine
while read <&7 filename; do
    case "$filename" in
        *'----' ) break ;; # break the loop when a line ends with four dashes.
    esac
    ....
done
read <&7 lastLine
exec 7<&- # This will close file descriptor 7.
# ls -l /dev/fd/
3.2 Same under bash
Under bash, you can let the shell choose any free fd for you and store it in a variable: exec {varname}</path/to/input:
while read -ru ${fle} filename; do
    ans=
    while [ -z "$ans" ]; do
        read -rp "Process file '$filename' (y/n)? " -sn 1 foo
        [ "$foo" ] && [ -z "${foo/[yn]}" ] && ans=$foo || echo '??'
    done
    if [ "$ans" = "y" ]; then
        echo Yes
        echo "Processing '$filename'."
    else
        echo No
    fi
done {fle}<file.txt
Or
exec {fle}<file.txt
# ls -l /dev/fd/
read -ru ${fle} headline
while read -ru ${fle} filename; do
    [[ -n "$filename" ]] && [[ -z ${filename//*----} ]] && break
    ....
done
read -ru ${fle} lastLine
exec {fle}<&-
# ls -l /dev/fd/
5. filtering input file for creating shell commands
sed <file.txt 's/.*/chmod 755 "&"/' | sh
This won't optimise forks, but it could be useful for more complex (or conditional) operations:
sed <file.txt 's/.*/if [ -e "&" ];then chmod 755 "&";fi/' | sh
sed 's/.*/[ -f "&" ] \&\& echo "Processing: \\"&\\"" \&\& chmod 755 "&"/' \
file.txt | sh
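To preview the generated commands before running anything (a simple check, not part of the original answer), just drop the final | sh:
sed <file.txt 's/.*/chmod 755 "&"/'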
Yes.
while read in; do chmod 755 "$in"; done < file.txt
This way you can avoid a cat process.
cat is almost always bad for a purpose such as this. You can read more about Useless Use of Cat.
If you have a nice selector (for example, all .txt files in a directory),
you could do:
for i in *.txt; do chmod 755 "$i"; done
bash for loop
or a variant of yours:
while read line; do chmod 755 "$line"; done < file.txt
If you want to run your command in parallel for each line you can use GNU Parallel
parallel -a <your file> <program>
Each line of your file will be passed to program as an argument. By default, parallel runs as many jobs as your CPU count, but you can specify the number with -j.
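For instance (a sketch, assuming GNU parallel is installed):
parallel -j 4 -a file.txt chmod 755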
If you know you don't have any whitespace in the input:
xargs chmod 755 < file.txt
If there might be whitespace in the paths, and if you have GNU xargs:
tr '\n' '\0' < file.txt | xargs -0 chmod 755
Nowadays (on GNU/Linux) xargs is still the answer for this, but... you can now use the -a option to read input directly from a file:
xargs -a file.txt -n 1 -I {} chmod 755 {}
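Since chmod accepts multiple files at once, a simpler variant (same GNU xargs assumption) could be:
xargs -a file.txt chmod 755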
You can also use AWK, which can give you more flexibility to handle the file:
awk '{ print "chmod 755 "$0"" | "/bin/sh"}' file.txt
If your file has a field separator, like:
field1,field2,field3
then to get only the first field, do:
awk -F, '{ print "chmod 755 "$1"" | "/bin/sh"}' file.txt
You can find more details in the GNU documentation:
https://www.gnu.org/software/gawk/manual/html_node/Very-Simple.html#Very-Simple
I see that you tagged bash, but Perl would also be a good way to do this:
perl -p -e '`chmod 755 $_`' file.txt
You could also apply a regex to make sure you're getting the right files, e.g. to only process .txt files:
perl -p -e '`chmod 755 $_` if /\.txt$/' file.txt
To "preview" what's happening, just replace the backticks with double quotes and prepend print:
perl -p -e 'print "chmod 755 $_" if /\.txt$/' file.txt
The logic applies to many other objectives.
And how do you read the .sh_history of each user from the /home/ filesystem? What if there are thousands of them?
#!/bin/ksh
last | head -10 | awk '{print $1}' |
while IFS= read -r line
do
    su - "$line" -c 'tail .sh_history'
done
Here is the script: https://github.com/imvieira/SysAdmin_DevOps_Scripts/blob/master/get_and_run.sh
