This question already has answers here:
Capturing output of find . -print0 into a bash array
(13 answers)
Closed 7 years ago.
I am currently writing a bash script that shall check some data. What I've got so far is:
#!/bin/bash
find "./" -mindepth 1 -maxdepth 1 -type d -print0 | while IFS= read -r -d '' file; do
folder=${file##*/}
echo "Checking ${folder} for sanity..."
./makeconfig ${folder} | while read -r line; do
title=`echo $line | awk -F' ' '{print $2}'`
echo $title
done
done
Now what it currently does is: search every directory in ./ and extract the folder's name (thus removing the ./ from the result of find), then hand it to a self-written tool, which will output some lines like this:
-t 1 -a 2
-t 3 -a 5
-t 7 -a 7
-t 9 -a 8
from which I gather the value behind -t via awk. This also works so far. The problem is that the outer while loop stops after the first iteration, thus checking only one folder. My guess is that the two read commands of the inner and outer loop are colliding somehow. The tool makeconfig definitely always returns 0 (no error). I tried to debug it using sh -x script.sh, but that does not show me anything I can work with.
Can someone point me in the right direction as to what is going wrong here? If you need ANY further information, I can provide it. I've written a quick mimicking program if you want to test the bash script (also a script now, just echoing some stuff); just make it executable via chmod +x:
echo "-t 3 -a 4"
echo "-t 6 -a 1"
echo "-t 9 -a 5"
Just put this with the script in a folder and create some subfolders; that should be enough to make it work (as much as it does).
Thanks in advance!
EDIT: This is NOT a duplicate as mentioned. The problem here is more about the nested read commands than about the print0 (maybe that also has something to do with it, but not entirely).
IFS= isn't setting the field separator to the null string (\0); it's unsetting it entirely, so the entire output of the find command is being read at once. If you run it without the -print0 argument to find, it'll be easier to work with in bash. Two other alternatives:
use xargs to run a shell script on each item found, with that item being the sole argument
use -exec to run the shell script on each item (a sketch of both follows)
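For instance, a minimal sketch of both alternatives, assuming a hypothetical helper script ./check.sh that takes one directory path as its sole argument:
# Variant 1: xargs starts ./check.sh once per NUL-delimited item
find ./ -mindepth 1 -maxdepth 1 -type d -print0 | xargs -0 -n 1 ./check.sh
# Variant 2: find itself runs ./check.sh on each directory it finds
find ./ -mindepth 1 -maxdepth 1 -type d -exec ./check.sh {} \;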
This question already has an answer here:
How to avoid printing an error in the console in a Bash script when executing a command?
(1 answer)
Closed 21 days ago.
I am not able to redirect an expected error to /dev/null in the following simple line of code:
xml=`ls ./XML_30fps/*.xml ./XML_24fps/*xml`
The expected error is due to the fact that one of the folders could be empty, in which case the error would be "No such file or directory." I don't want this error to show up for the users.
I could resolve this by breaking this line of code into several, but I was wondering if there is a simple way to redirect to /dev/null within a single line. Neither of these works:
xml=`ls ./XML_30fps/*.xml ./XML_24fps/*xml` &>/dev/null
xml=`ls ./XML_30fps/*.xml ./XML_24fps/*xml &>/dev/null`
This link, How to avoid printing an error in the console in a BASH script when executing a command?, kind of touches upon this, but it is not as clear as my question and the answer given here.
Redirect it within the subshell:
xml=`exec 2>/dev/null; ls ./XML_30fps/*.xml ./XML_24fps/*xml`
Or
xml=$(exec 2>/dev/null; ls ./XML_30fps/*.xml ./XML_24fps/*xml)
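Equivalently, as a sketch, the redirection can be attached to the ls itself inside the substitution, discarding stderr while keeping the captured stdout intact:
xml=$(ls ./XML_30fps/*.xml ./XML_24fps/*xml 2>/dev/null)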
How about substituting your command with an alternative that doesn't write to stderr, e.g.
xml=()
if [ -d XML_24fps ]; then
    xml+=($(find XML_24fps -maxdepth 1 -type f -name '*.xml'))
fi
if [ -d XML_30fps ]; then
    xml+=($(find XML_30fps -maxdepth 1 -type f -name '*.xml'))
fi
echo "${xml[@]}"
In the above, we're using find to locate all *.xml files in a subfolder. We put a guard condition so that we do not run find on folders that do not exist.
By noting that XML_24fps and XML_30fps are very similar, the difference being just the 24 and the 30, we can refactor the above with a {24,30} brace expansion as follows:
xml=()
for d in XML_{24,30}fps
do
    if [ -d "$d" ]; then
        xml+=($(find "$d" -maxdepth 1 -type f -name '*.xml'))
    fi
done
echo "${xml[@]}"
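If you're on bash, another option worth sketching is nullglob, which makes non-matching globs expand to nothing, so a missing or empty folder simply contributes no entries and no error:
shopt -s nullglob
xml=( ./XML_{24,30}fps/*.xml )
shopt -u nullglob
echo "${xml[@]}"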
This question already has an answer here:
How do I expand commands within a bash alias?
(1 answer)
Closed 4 years ago.
I've been using cd "$(\ls -1dt ./*/ | head -n 1)" in some scripts to get into a new directory after creating it. I decided to put an alias in my bash_profile:
alias newest="cd $(\ls -1dt ./*/ | head -n 1)"
But when I run newest from the command line, it goes to a different directory, which happens to be the first one alphabetically (though I don't know if that's why it's choosing that directory).
Pasting cd "$(\ls -1dt ./*/ | head -n 1)" directly into the command line works correctly. What's going on here?
Don't use ls -t in scripts at all -- see ParsingLs on why it's unreliable, and BashFAQ #3 on what to do instead. But ignoring that, the smallest fix for the immediate, narrow issue is to use a function:
newest() { cd "$(command ls -1dt ./*/ | head -n 1)"; }
Your alias ran the command substitution at definition time, not on invocation. If you really want it to remain an alias, you can use single quotes on the outside to prevent that command substitution from happening early:
alias newest='cd "$(\ls -1dt ./*/ | head -n 1)"'
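You can see the early expansion by printing the stored definition. In a sketch (the directory name ./alpha_dir/ is hypothetical), the double-quoted version shows that the $(...) has already been replaced by its result:
$ alias newest
alias newest='cd ./alpha_dir/'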
What would a reliable, best-practice approach look like? Perhaps:
cdNewest() {
    local latest='' candidate
    set -- */
    [[ -d $1 ]] || return  # handle the case where no directories exist, so the glob did not expand
    latest=$1; shift
    for candidate; do
        [[ $candidate -nt $latest ]] && latest=$candidate
    done
    cd -- "$latest"
}
...which, instead of running two external commands (ls and head), runs none at all, and avoids the need for command substitutions and pipelines altogether, both of which carry quite a bit of overhead.
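A quick usage sketch (directory and path names are hypothetical):
$ mkdir new_project
$ cdNewest
$ pwd
/home/user/new_project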
I'm trying to write a script that will go through all of the directories within a directory and query a specific sequence against a local BLAST database in each. I've run the BLAST search without the bash for loop, and I used a for loop to create the databases in the first place. I have tried everything suggested by others having this problem (where applicable) to no avail. I'm not copying and pasting anything; I've retyped the script and looked for stupid errors (of which I make plenty). Maybe I'm just not seeing it? Anyway, here's the code:
SRV01:~$ for d in ~/data/Shen_transcriptomes/transcriptomes/*/; do tblastn -query ~/data/chitin_binding_protein/cbp_Tectaria_macrodonta.fa -db "$d"*BLASTdb* -out "$(basename "$d")".out; done
When I run the same thing with echo "$d"*BLASTdb*, it returns the correct files, so the for loop seems to be working. But the above script returns:
Error: Too many positional arguments (1), the offending value: /home/dwickell/data/Shen_transcriptomes/transcriptomes/Acrostichum_aureum_RS90/RS_90_BLASTdb.nin
for every BLASTdb file in the directory.
-edit-
So this works, but I don't know enough about bash to understand why:
SRV01:~/data$ for d in /home/dwickell/data/Shen_transcriptome/transcriptomes/*/*.nin; do
    name=$(echo "$d" | cut -f 1 -d '.')
    blastn -query ./chitin_binding_protein/cbp_Tectaria_macrodonta.fa -db "$name" -outfmt 6 -out RS_103_tblastn.out; done
I'm betting you have a directory with more than one matching BLAST file. Try this test:
for d in ~/data/Shen_transcriptomes/transcriptomes/*/; do
    echo "For directory $d have:"
    ls -1 "$d"*BLASTdb*
    echo
done
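If that guess is right, at least one directory will list several index files, something like this (hypothetical output; a BLAST nucleotide database typically consists of .nin, .nhr, and .nsq files):
For directory /home/dwickell/data/Shen_transcriptomes/transcriptomes/Acrostichum_aureum_RS90/ have:
/home/dwickell/data/Shen_transcriptomes/transcriptomes/Acrostichum_aureum_RS90/RS_90_BLASTdb.nin
/home/dwickell/data/Shen_transcriptomes/transcriptomes/Acrostichum_aureum_RS90/RS_90_BLASTdb.nhr
/home/dwickell/data/Shen_transcriptomes/transcriptomes/Acrostichum_aureum_RS90/RS_90_BLASTdb.nsq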
Okay, so as I mentioned in the edit to my question above, I seem to have found a solution:
for d in /home/dwickell/data/Shen_transcriptomes/transcriptomes/*/*.nin; do
    name=$(echo "$d" | cut -f 1 -d '.')
    blastn -query ./chitin_binding_protein/cbp_Tectaria_macrodonta.fa -db "$name" -outfmt 6 -out "$(basename "$d" .nin)".out; done
I'm not clear on why this works, but it does. Perhaps it has something to do with the trailing asterisk in my earlier attempt? If anyone can clarify, please do! For my own purposes, however, I consider this solved.
Thanks to everyone for commenting.
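For what it's worth, the likely explanation is that BLAST's -db option expects a single database name (the index prefix, without extension), whereas the glob "$d"*BLASTdb* expands to every matching index file, producing the "Too many positional arguments" error. A sketch of the same fix using parameter expansion instead of cut, which also survives dots elsewhere in the path:
for d in /home/dwickell/data/Shen_transcriptomes/transcriptomes/*/*.nin; do
    name=${d%.nin}  # strip only the trailing .nin to get the database prefix
    blastn -query ./chitin_binding_protein/cbp_Tectaria_macrodonta.fa -db "$name" -outfmt 6 -out "$(basename "$name")".out
done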
I have written some code, but I am having a problem making the double loop in my bash script work. The script should read all the files one by one in the given directory and upload each of them, but the value of "XYZ" changes for each file. Is there a way to make the code ask me to enter the "XYZ" value every time it reads a new file to upload, ideally including the name of the file being read, like "please enter the XYZ value of <file name>"? I could not think of any way of doing this. I also have the XYZ values listed in a file in a different directory, so maybe they could be read in a loop like the one I wrote for the path? I might actually need both approaches as well...
#!/bin/bash
FILES=/home/user/downloads/files/
for f in $FILES
do
    curl -F pitch=9 -F Name='astn' \
         -F "path=@/home/user/downloads/files;$f" -F "pass=1234" -F "XYZ= 1.2" \
         -F time=30 -F outputFormat=json \
         "http://blablabla.com"
done
Try the following:
#!/bin/bash
FILES=/home/user/downloads/files/
for f in $FILES
do
    echo "Please enter the name variable value here:"
    read Name
    curl -F pitch=9 -F "$Name" \
         -F "path=@/home/user/downloads/files;$f" -F "pass=1234" -F "XYZ= 1.2" \
         -F time=30 -F outputFormat=json \
         "http://blablabla.com"
done
I have added a read command inside the loop, so each time it will prompt the user for a value. Since you haven't provided more details about your requirement, I haven't tested it completely.
The problem was actually the argument. Changing it to:
-F Name="$Name"
solved the problem. Passing the argument as just $Name or "$Name" is received badly.
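Putting the pieces together, here is a sketch of the loop with the corrected -F syntax and a per-file prompt for the XYZ value. The paths, field names, and URL are the placeholders from the question, and FILES has been turned into a glob so the loop actually visits each file:
#!/bin/bash
FILES=/home/user/downloads/files/*
for f in $FILES
do
    # prompt with the name of the file about to be uploaded
    read -r -p "Please enter the XYZ value of '$(basename "$f")': " xyz
    curl -F pitch=9 -F Name='astn' \
         -F "path=@$f" -F "pass=1234" -F "XYZ=$xyz" \
         -F time=30 -F outputFormat=json \
         "http://blablabla.com"
done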
I need to use 'last' to search through a list of users who logged into a system, i.e.
last -f /var/log/wtmp <username>
Considering the number of bzipped archive files in that directory, and considering I am on a shared system, I am trying to include an inline bzcat, but nothing seems to work. I have tried the following combinations with no success:
last -f <"$(bzcat /var/log/wtmp-*)"
last -f <$(bzcat /var/log/wtmp-*)
bzcat /var/log/wtmp-* | last -f -
Driving me bonkers. Any input would be great!
last (assuming the Linux version) can't read from a pipe. You'll need to temporarily bunzip2 the files to read them.
tempfile=`mktemp` || exit 1
for wtmp in /var/log/wtmp-*; do
    bzcat "$wtmp" > "$tempfile"
    last -f "$tempfile"
done
rm -f "$tempfile"
You can only use < I/O redirection on one file at a time.
If anything is going to work, then the last line of your examples is it, but does last recognize - as meaning standard input? (Comments on another answer indicate "No, last does not recognize -". Now you see why it is important to follow all the conventions: it makes life difficult when you don't.) Failing that, you'll have to do it the classic way with a shell loop.
for file in /var/log/wtmp-*
do
    last -f <(bzcat "$file")
done
Well, using process substitution like that is pure Bash...the classic way would be more like:
tmp=/tmp/xx.$$  # Or use mktemp
trap "rm -f $tmp; exit 1" 0 1 2 3 13 15
for file in /var/log/wtmp-*
do
    bzcat $file > $tmp
    last -f $tmp
done
rm -f $tmp
trap 0