I ran across the following code in a bash script.
# See if bsdtar can recognize the file
if bsdtar -tf "$file" -q '*' &>/dev/null; then
    cmd="bsdtar"
else
    continue
fi
What does the '-q' option mean? I did not find any information about it in the help message of the bsdtar command.
Thank you!
From the bsdtar man page:
-q (--fast-read)
(x and t mode only) Extract or list only the first archive entry
that matches each pattern or filename operand. Exit as soon as
each specified pattern or filename has been matched. By default,
the archive is always read to the very end, since there can be
multiple entries with the same name and, by convention, later
entries overwrite earlier entries. This option is provided as a
performance optimization.
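To illustrate the difference, here is a sketch (archive.tar and the etc/passwd entry are made-up names for the example):
# With -q, bsdtar stops reading as soon as the first matching entry is listed:
bsdtar -tf archive.tar -q 'etc/passwd'
# Without -q, the whole archive is read to the end, since a later entry
# named etc/passwd would by convention override an earlier one:
bsdtar -tf archive.tar 'etc/passwd'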
I have a file ('list') which contains a large list of filenames, with all kinds of character combinations such as:
-sensei
I am using the following script to process this list of files:
#!/bin/bash
while read -r line
do
html2text -o ./text/$line $line
done < list
Which is giving me 'Cannot open input file' errors.
What is the correct way of dealing with these filenames, to prevent any errors?
I have changed the example list above to now include only one filename (out of many) which does not work, no matter how I quote or don't quote it.
#!/bin/bash
while read -r line
do
html2text -o "./text/$line" "$line"
done < list
The error I get is:
Unrecognized command line option "-sensei", try "-help".
As such this question does not resolve this issue.
Something like this should fix your issues (unless the file list has CRLF line endings):
while IFS='' read -r file
do
html2text -o ./text/"$file" -- "$file"
done < filelist.txt
Notes:
IFS='' read -r is mandatory when you want to capture a line accurately.
Most commands support -- for signaling the end of options; whatever the following arguments might be, they will not be treated as options. Another common workaround for filenames that start with - is to prepend ./ to them, as in the sketch below.
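For completeness, a sketch combining both workarounds, which also strips a trailing carriage return in case the list has CRLF line endings (filelist.txt is the same assumed name as above):
while IFS='' read -r file
do
    file=${file%$'\r'}                       # drop a trailing CR from a CRLF line ending
    html2text -o ./text/"$file" ./"$file"    # the ./ prefix protects names starting with -
done < filelist.txt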
Are you sure the files are present? Usually there should not be a problem with your script. For example:
#!/bin/bash
while read -r line
do
touch ./text/"$line"
done < "$1"
ls -l ./text
works perfectly fine for me (with your example input); bash passes the names through unchanged. Are you in the right directory? If you are sure the files are present, the problem lies with html2text.
Also make sure not to have a trailing blank line in your input file.
I've got a .sql job which creates files depending on certain criteria. It writes these with a prefix of TEMP_, as we then have an adaptor that picks up the files, and we don't want them picked up before writing is complete.
I need to put a post job in place which renames these files. I have it set up with a number of other jobs, but those all create the files each time they run; this job only creates the files on some runs, depending on the data in the system.
I need to check whether the file exists, and exit if it does not.
I've found a few examples, but they all seem to fail. This is where I have got to, which I thought was checking if there is no file and exiting if so, but it fails and displays:
"syntax error at line 16: `TEMP_SUBCON*.csv' unexpected"
This is what I have currently, with line 16 being the first line (above that is just comments):
if [[ ! -f $OUT_DIR -name TEMP_SUBCON*.csv ]] ; then
exit $?
fi
TEMP_DATA_FILE=$(find $OUT_DIR -name TEMP_SUBCON_QTY_output*.csv)
DATA_FILE=$(basename $TEMP_DATA_FILE | cut -c6-)
echo $TEMP_DATA_FILE
echo $DATA_FILE
## Rename the file name, remove the phrase TEMP_, so that EAI can pick the file ##
mv $TEMP_DATA_FILE $OUT_DIR/$DATA_FILE
Can you help point out what I've done incorrectly?
Thanks
If I understand it right, you want to find the files with the TEMP_ prefix in your $OUT_DIR and, if there are any, rename them without the prefix. Then this should do the trick:
for file in "$OUT_DIR"/TEMP_SUBCON_*.csv; do
    if [[ -e $file ]]; then
        mv "$file" "$OUT_DIR/${file#*TEMP_}"   # strip everything up to and including TEMP_
    fi
done
exit
It will go through the directory, find each TEMP_ file, and rename it without the prefix (for example, TEMP_SUBCON_QTY_output1.csv becomes SUBCON_QTY_output1.csv). If there are none, it won't do anything.
That syntax is not valid for the [[ ... ]] test.
Why not use the result of the subsequent find command to check if there were any matching files in the specified directory instead, and quit if no files are returned (in other words, quit if the result variable is empty)?
Example:
TEMP_DATA_FILE=$(find "$OUT_DIR" -name "TEMP_SUBCON_QTY_output*.csv")
if [[ -z ${TEMP_DATA_FILE:=""} ]] ; then
exit 1
fi
Note 1: you should quote the pattern argument for the find command as shown.
Note 2: it is useful to use set -u in your ksh scripts to cause ksh to abort if variables are uninitialized when used (often the cause of errors), instead of using a default value. However, if you use set -u, then in any test you should explicitly give your own default value. That is the reason for using ${TEMP_DATA_FILE:=""} instead of ${TEMP_DATA_FILE}: to support the often very useful set -u option. Even when you do not use set -u, the ${TEMP_DATA_FILE:=""} inside tests makes it explicit what should happen, instead of relying on implicit behaviour.
Note 3: you should use set -x when debugging and study every line of the output; it will show you exactly which commands ran with which arguments and what the results were. This helps you learn how to code in ksh and similar shells.
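A minimal illustration of the set -u interaction (UNSET_VAR is a made-up name):
set -u
# [[ -z $UNSET_VAR ]]                  # would abort the script: parameter not set
[[ -z ${UNSET_VAR:=""} ]] && echo "empty or unset"   # safe under set -u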
I am working with a program that combines individual files, and I am incorporating this program into a bash pipeline that I'm putting together. The program requires a flag for each file, like so:
program -V file_1.g.vcf -V file_2.g.vcf -V file_3.g.vcf -O combined_output.g.vcf
In order to allow the script to work with any number of samples, I would like to read the individual file names within a directory and expand the path for each file after a '-V' flag.
I have tried adding the file paths to a variable with the following, but have not had success with proper expansion:
GVCFS=('-V' `ls gvcfs/*.g.vcf`)
Any help is greatly appreciated!
You can do this by using a loop to populate an array with the options:
options=()
for file in gvcfs/*.g.vcf; do # Don't parse ls, just use a direct wildcard expression
options+=(-V "${file##*/}") # If you want the full path, leave off ##*/
done
program "${options[#]}" -O combined_output.g.vcf
printf can help:
options=( $(printf -- "-V %s " gvcfs/*.g.vcf ) )
Though this will not deal gracefully with whitespace in filenames.
Also consider realpath to generate absolute filenames.
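Combining the loop approach with realpath gives a sketch that also survives whitespace in filenames (program and the gvcfs/ directory are the names from the question):
options=()
for file in gvcfs/*.g.vcf; do
    options+=(-V "$(realpath "$file")")   # one -V flag with an absolute path per file
done
program "${options[@]}" -O combined_output.g.vcf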
Terminal beginner here. I was reading through a tutorial and encountered the following command:
rm -f src/*
For my own edification, I want to know what -f does.
However, when I enter man -f, I get the error response "What manual page do you want?", and when I run man f, I get the response "No manual entry for f".
What's the correct way to get the definition of -f in this context from the terminal?
-f is a parameter of the rm program. It doesn't have the same meaning for all programs, so you have to look at the manual page of the program in question; man rm in your case, which says:
-f, --force
ignore nonexistent files and arguments, never prompt
For instance, in tail the -f parameter means follow (output appended data as the file grows). You can learn that from tail's manual page, which is man tail.
-f in this context is a flag you add to rm. You'll see it documented under man rm. The relevant part of the output will show:
-f, --force
ignore nonexistent files and arguments, never prompt
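As a general technique, you can search a man page for a flag straight from the terminal; for example (output formatting varies between systems):
man rm | grep -A1 -- '-f,'   # print the line describing -f and the one after it
Inside the pager that man opens, typing /-f and pressing Enter jumps to the same text.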
I need to create a set of regex patterns to be used within the --remove option of the lcov command, in order to remove some pointless entries in the coverage file.
Reading the manpage of lcov, it seems the list of patterns shall be handed to it as space-separated single-quoted strings (as in 'patte*n1' 'pa*ter*2' 'p*3' and so on).
I was able to write a small bash script which generates exactly the list I need, in the form required by the command. If I issue
export LIST=$( ./myscript.sh )
and then do an echo $LIST, I get the list I expect:
'path/file1' 'path/file2' 'path/file3'
(the regex patterns list is comprised of the ending part of some patterns to be removed from the analysis).
The problem arises when I pass this list to the command:
lcov --remove coverage_report.info '/usr/*' $( echo $LIST ) --output-file coverage_report.info.cleaned
in order to remove both /usr/* and the files from my list, but it does not work as expected: no path from my path list is actually removed. I tried different forms of it:
echo $LIST | xargs -i lcov --remove coverage_report.info {} --output-file coverage_report.info.cleaned
If I take the output of echo $LIST and copy/paste it directly on the command line, the command actually works and removes all the paths I'd like to get rid of. My impression is that I'm not aware of all the inner aspects of some commands' option processing and the order of evaluation of nested commands.
Thanks in advance to any people willing to help!
Ok,
I finally got it done by issuing
echo $LIST | xargs lcov --output-file coverage_report.info.cleaned --remove coverage_report.info
I switched to the echo ... | xargs way of doing things in order to force the evaluation order of the commands, but the -i way of doing it was not appropriate. I had the idea that the purpose of the -i switch was simply to substitute the input wherever the {} token appears, but with -i xargs passes each whole input line as a single argument, so lcov received all the patterns glued together as one argument rather than as a separate argument per pattern.
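The underlying quoting issue is worth spelling out: quotes stored inside a variable are literal characters, not shell syntax. A quick way to see what lcov actually receives (made-up pattern names):
LIST="'path/file1' 'path/file2'"
printf '<%s> ' $LIST; echo
# prints: <'path/file1'> <'path/file2'>
That is also why piping through plain xargs works: by default, xargs performs its own quote processing on its input, stripping the single quotes and splitting the patterns into separate arguments.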
With lcov 1.12 and bash 4.3.46(1), the suggestion above did not work for me. The list needed to be expanded as follows:
echo "${LIST[@]}" | xargs lcov -o coverage_report.info.cleaned -r coverage_report.info
Moreover, my list was a list of double-quoted strings, e.g.
LIST=( '"path/file1"' '"path/file2"' '"path/file3"' )