I'm experimenting with some arguments for the rename command, using the -n option to do "dry runs". How can I make it write its output to a file so I can analyze it further? The following does not work -- the resulting rename.log is empty:
$ echo "XXX" > \"XXX\"__XXX.txt
$ rename -n 's/"([^\/"《》]+)"__(.*)/“$1”__$2/' '{}' \; *.txt > rename.log
'"XXX"__XXX.txt' would be renamed to '“XXX”__XXX.txt'
Mark's comment is correct: it seems the -n option writes its output to stderr. So you can run a command like this:
rename -n [options] > rename.log 2>&1
If you wanted to pipe the output to another command (as I was trying to do), put the redirection before the pipe:
rename -n [options] 2>&1 | less
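In Bash 4 and later, |& is shorthand for 2>&1 |, so this is equivalent:
rename -n [options] |& less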
I'm trying to redirect a command's output into a file only if the command succeeds, because I don't want the redirection to erase the file's contents when the command fails.
(The command reads that same file as its input.)
I'm currently using
cat <<< $( <command> ) > file;
This erases the file if the command fails.
It's possible to do what I want by storing the output in a temp file like that:
<command> > temp_file && cat temp_file > file
But that looks kind of messy to me; I want to avoid manually creating temp files (I know the <<< redirection creates a temp file itself).
I finally came up with this trick
cat <<< $( <command> || cat file) > file;
This will not change the contents of the file when the command fails... but it is even messier, I guess.
Perhaps capture the output into a variable, and echo the variable into the file if the exit status is zero:
output=$(command) && echo "$output" > file
Testing
$ out=$(bash -c 'echo good output') && echo "$out" > file
$ cat file
good output
$ out=$(bash -c 'echo bad output; exit 1') && echo "$out" > file
$ cat file
good output
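If you need this pattern in several places, you could wrap it in a small function (a minimal sketch; write_if_ok is a hypothetical name, and it assumes the output fits comfortably in memory):
write_if_ok() {
    local file=$1; shift
    local out
    out=$("$@") || return    # on failure, propagate the exit status and leave the file untouched
    printf '%s\n' "$out" > "$file"
}
# Hypothetical usage: overwrite result.txt only if grep succeeds
write_if_ok result.txt grep ERROR app.log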
Remember, the > operator replaces the existing contents of the file with the output of the command. If you want to save the output of multiple commands to a single file, you’d use the >> operator instead. This will append the output to the end of the file.
For example, the following command will append output information to the file you specify:
ls -l >> /path/to/file
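A quick demonstration of the difference, using a scratch file:
$ echo first > /tmp/demo.log     # > truncates the file before writing
$ echo second >> /tmp/demo.log   # >> appends to it
$ cat /tmp/demo.log
first
second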
So, to log the command output only if it succeeds, you can try something like this:
if output=$(command); then
    echo "$output" >> /path/to/file
fi
How can I suppress error messages for a shell command?
For example, if there are only jpg files in a directory, running ls *.zip gives an error message:
$ ls *.zip
ls: cannot access '*.zip': No such file or directory
Is there an option to suppress such error messages? I want to use this command in a Bash script, but I want to hide all errors.
Most Unix commands, including ls, will write regular output to standard output and error messages to standard error, so you can use Bash redirection to throw away the error messages while leaving the regular output in place:
ls *.zip 2> /dev/null
$ ls *.zip 2>/dev/null
will redirect any error messages on stderr to /dev/null (i.e. you won't see them).
Note that the return value (given by $?) will still reflect that an error occurred.
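For example, with GNU ls (which exits with status 2 when it cannot access a command-line argument):
$ ls *.zip 2>/dev/null
$ echo $?
2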
To suppress error messages and also return the exit status zero, append || true. For example:
$ ls *.zip && echo hello
ls: cannot access *.zip: No such file or directory
$ ls *.zip 2>/dev/null && echo hello
$ ls *.zip 2>/dev/null || true && echo hello
hello
$ touch x.zip
$ ls *.zip 2>/dev/null || true && echo hello
x.zip
hello
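If you'd rather avoid the error entirely instead of hiding it, Bash's nullglob option makes an unmatched glob expand to nothing (a minimal sketch):
shopt -s nullglob
files=(*.zip)                        # empty array if nothing matches; no error is printed
if [ "${#files[@]}" -gt 0 ]; then
    printf '%s\n' "${files[@]}"
fi
shopt -u nullglob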
I attempted ls -R [existing file] and got an immediate error.
ls: cannot access 'existing file': No such file or directory
So, I used the following:
ls -R 2>/dev/null | grep -i '[existing file]'
ls -R 2>/dev/null | grep -i 'text'
Or, in your case:
ls -R 2>/dev/null | grep -i '\.zip'
This is my solution on a Raspberry Pi 3 running Buster:
ls -R 2>/dev/null | grep -i '[existing file]'
2>/dev/null is very useful in Bash scripts to avoid useless warnings and errors.
Do not forget the slash character.
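A recursive find achieves the same thing without piping ls into grep (a hedged alternative; adjust the pattern to taste):
find . -iname '*.zip' 2>/dev/null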
How do you use a command line argument as a file path and check for file existence in Bash?
I have the simple Bash script test.sh:
#!/bin/bash
set -e
echo "arg1=$1"
if [ ! -f "$1" ]
then
echo "File $1 does not exist."
exit 1
fi
echo "File exists!"
and in the same directory, I have a data folder containing stuff.txt.
If I run ./test.sh data/stuff.txt I see the expected output:
arg1=data/stuff.txt
"File exists!"
However, if I call this script from a second script test2.sh, in the same directory, like:
#!/bin/bash
fn="data/stuff.txt"
./test.sh $fn
I get the mangled output:
arg1=data/stuff.txt
does not exist
Why does the call work when I run it manually from a terminal, but not when I run it through another Bash script, even though both are receiving the same file path? What am I doing wrong?
Edit: The filename does not have spaces. Both scripts are executable. I'm running this on Ubuntu 18.04.
The filename was getting an extra whitespace character added to it as a result of how I was retrieving it in my second script. I didn't note this in my question, but I was retrieving the filename from a folder listing over SSH, like:
fn=$(ssh -t "cd /project/; ls -t data | head -n1" | head -n1)
Essentially, I wanted to get the filename of the most recent file in a directory on a remote server. It turned out the captured name kept a trailing carriage return: ssh -t allocates a pseudo-terminal, which converts the newline to \r\n, and while command substitution strips the trailing newline, the \r survives. I fixed it by changing it to:
fn=$(ssh -t "cd /project/; ls -t data | head -n1" | head -n1 | tr -d '\n' | tr -d '\r')
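If you'd rather not add extra tr processes, the stray characters can also be trimmed with parameter expansion after the capture (a minimal sketch operating on the variable itself):
fn=${fn%$'\n'}   # strip one trailing newline, if present
fn=${fn%$'\r'}   # strip one trailing carriage return, if present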
Thanks to @bigdataolddriver for hinting that the problem was likely an extra character.
I want to copy only the 20180721 files from the Outgoing folder to the Incoming folder. I also want to remove the leading numbers from the file name and rename the -1 suffix to -3. I want to keep my commands to a minimum, so I am using the pax command below.
Filename:
216118105741_MOM-09330-20180721_102408-1.jar
Output expected:
MOM-09330-20180721_102408-3.jar
I have tried this command, and it's doing most of the work apart from removing the numbers at the front of the file name. Can anyone help?
Command used:
pax -rw -pe -s/-1/-3/ ./*20180721*.jar ../Incoming/
Try this simple script using just parameter expansion:
for file in *20180721*.jar; do
    new=${file#*_}    # strip everything up to and including the first "_"
    cp -- "$file" "/path/to/destination/${new%-*}-3.jar"    # "${new%-*}" drops the trailing "-1"
done
You can try this.
In general:
for i in files-to-copy-*; do
    cp -- "$i" "$(echo "$i" | sed 's/rename-from/rename-to/g')"
done
In your case:
for i in *_MOM*; do
    cp -- "$i" "$(echo "$i" | sed 's/^[^_]*_//; s/-1\.jar$/-3.jar/')"
done
pax only applies the first successful substitution even if the -s option is specified more than once. You can pipe the output to a second pax instance, though.
pax -w -s ':^[^_]*_::p' *20180721*.jar | (builtin cd ../Incoming; pax -r -s ':1[.]jar$:3.jar:p')
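For the sample filename, the two substitutions apply in sequence (the trailing p flag makes pax print each rename so you can verify):
216118105741_MOM-09330-20180721_102408-1.jar    (original)
MOM-09330-20180721_102408-1.jar                 (after the first -s strips through the first underscore)
MOM-09330-20180721_102408-3.jar                 (after the second -s rewrites the 1.jar suffix during extraction)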