I'm trying to turn a command I run manually into a Makefile target, but I'm getting an error relating to my use of cat. I think I'm trying to open the file "cat" instead of cat'ing the actual file...
queries.sql : clean
	ls ./sql/**/*.sql | sort -V | while read fn ; do (cat "${fn}"; echo; echo) >> queries.sql; done

clean :
	rm -f queries.sql;
Running the ls command on its own works great, but with the Makefile I just get
cat: : No such file or directory
cat: : No such file or directory
cat: : No such file or directory
cat: : No such file or directory
cat: : No such file or directory
cat: : No such file or directory
...
I'm a rookie when it comes to this, so it's probably something simple. Also, I believe there is a way to do this without cat?
Edit:
Oh, and the SQL scripts are prefixed by dates, e.g. 2020-01-02, so I'd like to keep the filenames sorted, as the sort -V does.
This is probably the most often asked question about make these days.
The $ character is special to make, so if you want to pass along a $ to the shell you have to escape it, by using $$.
So cat "${fn}" must be written cat "$${fn}" in your recipe.
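For reference, here is a minimal sketch of the corrected target (note that make recipe lines must be indented with a tab character):

queries.sql : clean
	ls ./sql/**/*.sql | sort -V | while read fn ; do (cat "$${fn}"; echo; echo) >> queries.sql; done

clean :
	rm -f queries.sql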
Related
Trying to write a bash script to copy a large number of files from an external drive into separate directories based on a subject id.
I've included the script I've written below.
I get the following error:
cat: /Volumes/Seagate: No such file or directory
cat: Backup: No such file or directory
cat: Plus: No such file or directory
cat: Drive/Subject_List.txt: No such file or directory
When I try to copy a single file at a time using the terminal, it copies using the exact command I've put in this script. I'm not sure why it's not recognizing the directory when I try and use it in the script below. Any help is greatly appreciated!
#!/bin/bash
#A bash script to copy the structural and functional files into the HCP_Entropy folder
#subject list
SUBJECT_LIST="/Volumes/Seagate/Backup/Plus/Drive/Subject_List.txt
for j in $(cat ${SUBJECT_LIST}); do
echo ${j}
cp /Volumes/Seagate\ Backup\ Plus\ Drive/HCP_DATA/Structural/{j}/unprocessed/3T/T1w_MPR1/${j}_3T_T1w_MPR1.nii.gz /Users/myname/Box/HCP_Entropy/BEN/${j}/anat
done
The line
$SUBJECT_LIST=/Volumes/Seagate\ Backup\ Plus\ Drive/Subject_List.txt
is bogus.
To assign a value to a variable, you must not add the $ sigil.
A token starting with $ will be expanded, so $SUBJECT_LIST=... will first be expanded to =... (since you haven't assigned anything to the SUBJECT_LIST variable yet, it is empty).
The proper way would be:
SUBJECT_LIST="/Volumes/Seagate Backup Plus Drive/Subject_List.txt"
(This uses quotes instead of escaping each space, which I find much more readable.)
You also need to quote variables in case they contain spaces; otherwise they may be interpreted by the command (cp) as multiple arguments.
for j in $(cat "${SUBJECT_LIST}"); do
# ...
done
And of course, you should check whether the source file actually exists, just like the destination directory.
indir="/Volumes/Seagate Backup Plus Drive"
SUBJECT_LIST="${indir}/Subject_List.txt"
cat "${SUBJECT_LIST}" | while read j; do
infile="${indir}/HCP_DATA/Structural/${j}/unprocessed/3T/T1w_MPR1/${j}_3T_T1w_MPR1.nii.gz"
outdir="/Users/myname/Box/HCP_Entropy/BEN/${j}/anat"
mkdir -p "${outdir}"
if [ -e "${infile}" ]; then
cp -v "${infile}" "${outdir}"
else
echo "missing file ${infile}" 1>&2
fi
done
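As a small optional hardening (a sketch, assuming one subject ID per line in the list file): read -r stops read from interpreting backslash escapes, and reading from a redirect avoids the extra cat process:

while read -r j; do
    # ... same per-subject copying as above ...
    echo "${j}"
done < "${SUBJECT_LIST}"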
First I create 3 files:
$ touch alpha bravo carlos
Then I want to save the list to a file:
$ ls > info.txt
However, I always find info.txt itself listed inside:
$ cat info.txt
alpha
bravo
carlos
info.txt
It looks like the redirection operator creates my info.txt first.
In this case, my question is: how can I save my list of files without info.txt itself appearing in it?
The main question is about the redirection operator: why does it act first, and how can I delay it so my listing completes first? Please use the example above to answer it.
When you redirect a command's output to a file, the shell opens a file handle to the destination file, then runs the command in a child process whose standard output is connected to this file handle. There is no way to change this order, but you can redirect to a file in a different directory if you don't want the ls output to include the new file.
ls >/tmp/info.txt
mv /tmp/info.txt ./
In a production script, you should make sure that the file name is unique and unpredictable.
t=$(mktemp -t lstemp.XXXXXXXXXX) || exit
trap 'rm -f "$t"' INT HUP
ls >"$t"
mv "$t" ./info.txt
Alternatively, capture the output into a variable, and then write that variable to a file.
files=$(ls)
echo "$files" >info.txt
As an aside, probably don't use ls in scripts. If you want a list of files in the current directory
printf '%s\n' *
does that.
One simple approach is to save your command output to a variable, like this:
ls_output="$(ls)"
and then write the value of that variable to the file, using any of these commands:
printf '%s\n' "$ls_output" > info.txt
cat <<< "$ls_output" > info.txt
echo "$ls_output" > info.txt
Some caveats with this approach:
Bash variables can't contain null bytes. If the output of the command includes a null byte, that byte and everything after it will be discarded.
In the specific case of ls, though, this shouldn't be an issue, because the output of ls should never contain a null byte.
$(...) removes trailing newlines. The above compensates for this by adding a newline back while creating info.txt, but if the command output ends with multiple newlines, then the above will effectively collapse them into a single newline.
In the specific case of ls, this could happen if a filename ends with a newline — very unusual, and unlikely to be intentional, but nonetheless possible.
Since the above adds a newline while creating info.txt, it will put a newline there even if the command output doesn't end with a newline.
In the specific case of ls, this shouldn't be an issue, because the output of ls should always end with a newline.
If you want to avoid the above issues, another approach is to save your command output to a temporary file in a different directory, and then move it to the right place; for example:
tmpfile="$(mktemp)"
ls > "$tmpfile"
mv -- "$tmpfile" info.txt
. . . which obviously has different caveats (e.g., it requires access to write to a different directory), but should work on most systems.
One way to do what you want is to exclude the info.txt file from the ls output.
If you can rename the list file to .info.txt then it's as simple as:
ls >.info.txt
ls doesn't list files whose names start with . by default.
If you can't rename the list file but you've got GNU ls then you can use:
ls --ignore=info.txt >info.txt
Failing that, you can use:
ls | grep -v '^info\.txt$' >info.txt
All of the above options have the advantage that you can safely run them after the list file has been created.
Another general approach is to capture the output of ls with one command and save it to the list file with a second command. As others have pointed out, temporary files and shell variables are two specific ways to capture the output. Another way, if you've got the moreutils package installed, is to use the sponge utility:
ls | sponge info.txt
Finally, note that you may not be able to reliably extract the list of files from info.txt if it contains plain ls output. See ParsingLs - Greg's Wiki for more information.
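If you do need a machine-readable list, one hedged alternative (assuming your find supports -maxdepth and -print0, as GNU and BSD versions do; info.bin is a hypothetical output name) is a null-delimited dump that excludes the output file itself:

find . -mindepth 1 -maxdepth 1 ! -name info.bin -print0 > info.bin

Each entry is then terminated by a NUL byte rather than a newline, so even filenames containing newlines survive intact.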
I'm a beginner in the terminal and bash language, so please be gentle and answer thoroughly. :)
I'm using Cygwin terminal.
I'm using the file command, which returns the file type, like:
$ file myfile1
myfile1: HTML document, ASCII text
Now, I have a directory called test, and I want to check the type of all files in it.
My endeavors:
I checked the man page for file (man file), and I could see from the examples that you can type the names of several files after the command and it gives the type of each, like:
$ file myfile{1,2,3}
myfile1: HTML document, ASCII text
myfile2: gzip compressed data
myfile3: HTML document, ASCII text
But my files' names are random, so there's no specific pattern to follow.
I tried using the for loop, which I think is going to be the answer, but this didn't work:
$ for f in ls; do file $f; done
ls: cannot open `ls' (No such file or directory)
$ for f in ./; do file $f; done
./: directory
Any ideas?
Every Unix or Linux shell supports some kind of globbing. In your case, all you need is the * glob. This magic symbol represents all folders and files in the given path.
e.g., file directory/*
Shell will substitute the glob with all matching files and directories in the given path. The resulting command that will actually get executed might be something like:
file directory/foo directory/bar directory/baz
You can use a combination of the find and xargs command.
For example:
find /your/directory/ | xargs file
HTH
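One caveat worth hedging: a plain find | xargs pipeline splits on whitespace, so filenames containing spaces will confuse it. If your find and xargs support the null-delimited options (GNU and BSD versions do), this variant is safer, and -type f also restricts the output to regular files:

find /your/directory/ -type f -print0 | xargs -0 file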
file directory/*
is probably the shortest, simplest solution to your issue, but this is more of an answer as to why your loops weren't working.
for f in ls; do file $f; done
ls: cannot open `ls' (No such file or directory)
For this loop it is saying "for f in the directory or file 'ls'; do ...". If you wanted it to execute the ls command, then you would need to do something like this:
for f in `ls`; do file "$f"; done
But that wouldn't work correctly if any of the filenames contain whitespace. It is safer and more efficient to use the shell's builtin "globbing" like this
for f in *; do file "$f"; done
For this one there's an easy fix.
for f in ./; do file $f; done
./: directory
Currently, you're asking it to run the file command on the directory "./".
Changing it to ./* means everything within the current directory (which is the same thing as just *):
for f in ./*; do file "$f"; done
Remember, double quote variables to prevent globbing and word splitting.
https://github.com/koalaman/shellcheck/wiki/SC2086
I am very, very new to UNIX programming (running on MacOSX Mountain Lion via Terminal). I've been learning the basics from a bioinformatics and molecular methods course (we've had two classes) where we will eventually be using perl and python for data management purposes. Anyway, we have been tasked with writing a shell script to take data from a group of files and write it to a new file in a format that can be read by a specific program (Migrate-N).
I have gotten a number of functions to do exactly what I need independently when I type them into the command line, but when I put them all together in a script and try to run it I get an error. Here are the details (I apologize for the length):
#! /bin/bash
grep -f Samples.NFCup.txt locus1.fasta > locus1.NFCup.txt
grep -f Samples.NFCup.txt locus2.fasta > locus2.NFCup.txt
grep -f Samples.NFCup.txt locus3.fasta > locus3.NFCup.txt
grep -f Samples.NFCup.txt locus4.fasta > locus4.NFCup.txt
grep -f Samples.NFCup.txt locus5.fasta > locus5.NFCup.txt
grep -f Samples.Salmon.txt locus1.fasta > locus1.Salmon.txt
grep -f Samples.Salmon.txt locus2.fasta > locus2.Salmon.txt
grep -f Samples.Salmon.txt locus3.fasta > locus3.Salmon.txt
grep -f Samples.Salmon.txt locus4.fasta > locus4.Salmon.txt
grep -f Samples.Salmon.txt locus5.fasta > locus5.Salmon.txt
grep -f Samples.Cascades.txt locus1.fasta > locus1.Cascades.txt
grep -f Samples.Cascades.txt locus2.fasta > locus2.Cascades.txt
grep -f Samples.Cascades.txt locus3.fasta > locus3.Cascades.txt
grep -f Samples.Cascades.txt locus4.fasta > locus4.Cascades.txt
grep -f Samples.Cascades.txt locus5.fasta > locus5.Cascades.txt
echo 3 5 Salex_melanopsis > Smelanopsis.mig
echo 656 708 847 1159 779 >> Smelanopsis.mig
echo 154 124 120 74 126 NFCup >> Smelanopsis.mig
cat locus1.NFCup.txt locus2.NFCup.txt locus3.NFCup.txt locus4.NFCup.txt locus5.NFCup.txt >> Smelanopsis.mig
echo 32 30 30 18 38 Salmon River >> Smelanopsis.mig
cat locus1.Salmon.txt locus2.Salmon.txt locus3.Salmon.txt locus4.Salmon.txt locus5.Salmon.txt >> Smelanopsis.mig
echo 56 52 24 29 48 Cascades >> Smelanopsis.mig
cat locus1.Cascades.txt locus2.Cascades.txt locus3.Cascades.txt locus4.Cascades.txt locus5.Cascades.txt >> Smelanopsis.mig
The series of greps are just pulling out DNA sequence data for each site for each locus into new text files. The Samples...txt files have the sample ID numbers for a site, the .fasta files have the sequence information organized by sample ID; the grepping works just fine in command line if I run it individually.
The second group of code creates the actual new file I need to end up with, which ends in .mig. The echo lines are data about counts (basepairs per locus, populations in the analysis, samples per site, etc.) that the program needs information on. The cat lines mash the locus-by-site data created by all the grepping together, beneath the site-specific information dictated in the echo lines. You no doubt get the picture.
For creating the shell script I've been starting in Excel so I can easily copy-paste/autofill cells, saving as tab-delimited text, then opening that text file in TextWrangler to remove the tabs before saving as a .sh file (Line breaks: Unix (LF) and Encoding: Unicode (UTF-8)) in the same directory as all the files used in the script. I've tried using chmod +x FILENAME.sh and chmod u+x FILENAME.sh to try to make sure it is executable, but to no avail. Even if I cut the script down to just a single grep line (with the #! /bin/bash first line) I can't get it to work. The process only takes a moment when I type it directly into the command line as none of these files are larger than 160KB and some are significantly smaller. This is what I type in and what I get when I try to run the file (HW is the correct directory)
localhost:HW Mirel$ MigrateNshell.sh
-bash: MigrateNshell.sh: command not found
I've been at this impasse for two days now, so any input would be greatly appreciated! Thanks!!
For security reasons, the shell will not search the current directory (by default) for an executable. You have to be specific, and tell bash that your script is in the current directory (.):
$ ./MigrateNshell.sh
Change the first line to the following as pointed out by Marc B
#!/bin/bash
Then mark the script as executable and execute it from the command line
chmod +x MigrateNshell.sh
./MigrateNshell.sh
or simply execute bash from the command line passing in your script as a parameter
/bin/bash MigrateNshell.sh
Make sure you are not using "PATH" as a variable, which will override the existing PATH for environment variables.
Also try to dos2unix the shell script, because sometimes it has Windows line endings and the shell does not recognize it.
$ dos2unix MigrateNshell.sh
This helps sometimes.
#! /bin/bash
^---
remove the indicated space. The shebang should be
#!/bin/bash
Unix has a variable called PATH, which is a colon-separated list of directories in which to find commands.
$ echo $PATH
/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/Users/david/bin
If I type a command foo at the command line, my shell will first see if there's an executable command /usr/local/bin/foo. If there is, it will execute /usr/local/bin/foo. If not, it will see if there's an executable command /usr/bin/foo and if not there, it will look to see if /bin/foo exists, etc. until it gets to /Users/david/bin/foo.
If it can't find a command foo in any of those directories, it tells me command not found.
There are several ways I can handle this issue:
Use the command bash foo, since foo is a shell script.
Include the directory name when you execute the command, like /Users/david/foo or $PWD/foo or just plain ./foo.
Change your $PATH variable to add the directory that contains your commands to the PATH.
You can modify $HOME/.bash_profile or $HOME/.profile if .bash_profile doesn't exist. I did that to add in /usr/local/bin, which I placed first in my path. This way, I can override the standard commands that are in the OS. For example, I have Ant 1.9.1, but the Mac came with Ant 1.8.4. I put my ant command in /usr/local/bin, so my version of ant will execute first. I also added $HOME/bin to the end of the PATH for my own commands. If I had a file like the one you want to execute, I'd place it in $HOME/bin to execute it.
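As a sketch of that last option (the directory layout here is just the example from above; adjust for your system), the PATH change in $HOME/.bash_profile can look like:

# prepend /usr/local/bin so it overrides the OS commands; append $HOME/bin for personal scripts
export PATH="/usr/local/bin:$PATH:$HOME/bin"

Open a new terminal (or run source ~/.bash_profile) and anything executable in $HOME/bin can then be run by name.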
Try chmod u+x MigrateNshell.sh
There have been a few good comments about adding the shebang line to the beginning of the script. I'd like to add a recommendation to use the env command as well, for additional portability.
While #!/bin/bash may be the correct location on your system, that's not universal. Additionally, that may not be the user's preferred bash. #!/usr/bin/env bash will select the first bash found in the path.
Also make sure /bin/bash is the proper location for bash; if you took that line from an example somewhere, it may not match your particular server. If you are specifying an invalid location for bash, you're going to have a problem.
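A minimal sketch of such a portable header (the echo line is just for illustration):

#!/usr/bin/env bash
# env searches $PATH for bash, so this works even if bash is not installed in /bin
echo "running with bash ${BASH_VERSION}"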
Add the lines below to your .profile:
PATH=$PATH:$HOME/bin:$Dir_where_script_exists
export PATH
Now your script should work without ./
I'm new to shell scripting too, but I had this same issue. Make sure at the end of your script you have a blank line. Otherwise it won't work.
First:
chmod 777 ./MigrateNshell.sh
Then:
./MigrateNshell.sh
Or, add your program to a directory recognized in your $PATH variable, which will then allow you to call your program without ./
Run a recursive listing of all the files in /var/log and redirect standard output to a file called lsout.txt in your home directory. Complete this question WITHOUT leaving your home directory.
Answer: ls -R /var/log/ > /home/bqiu/lsout.txt
I reckon the above bash command is not correct, because what I found it stores was:
$ ls -R /var/log
/var/log:
empty.txt setup.log setup.log.full tmp
/var/log/tmp:
fake.txt subfolder
/var/log/tmp/subfolder:
Does that mean the problem is resolved?
I reckon NOT, because it contains more "stuff" than "only files".
Or at least, if the purpose was to locate all "files" underneath the "/var/log" directory recursively, then I'd hope to get an answer like this:
/var/log/empty.txt
/var/log/setup.log
/var/log/setup.log.full
/var/log/tmp/fake.txt
So then someone can parse the content of the output for later use, such as:
$ perl -wnle 'print "$. :" , $_;' logfiles
1 :/var/log/empty.txt
2 :/var/log/setup.log
3 :/var/log/setup.log.full
4 :/var/log/tmp/fake.txt
This is what I've got so far:
$ ls -1R
.:
cal.sh
cokemachine.sh
dir
sort
test.sh
./dir:
afile.txt
file
subdir
./dir/subdir:
$ ls -R | sed s/^.*://g
cal.sh
cokemachine.sh
dir
sort
test.sh
afile.txt
file
subdir
But this still leaves all the directory and sub-directory names (dir and subdir), plus a couple of empty lines.
How could I get the correct result without using Perl or awk, preferably using only basic bash commands? (This is just because Perl and awk are out of the assessment's scope.)
Edited: I focused on my own "$HOME" folder just to restrict the files listed; my home directory has little in it.
Edited 2nd: Sorry about the inappropriate initial form of my question. I fixed the wording, and hopefully everyone can see the problem now.
Try -
find /var/log > ~/lsout.txt
If you were given no restrictions in terms of which commands can or cannot be used, ls -R /var/log >~/lsout.txt or find /var/log -print >"$HOME/lsout.txt" or any similar combination will work just fine.
However, if the point of the assignment is to write a 100% sh-based implementation, without using ls -R, find, etc. then you should be producing something along the lines of:
#!/bin/sh
# Helper method which recursively lists the contents of a given directory
# Usage: recurse_ls target_directory
recurse_ls()
{
    TARGET_DIR="$1"
    # list contents of $TARGET_DIR
    ...
    # - recursive call to list contents of sub-directories
    recurse_ls ...
    ...
}
# MAIN
# Usage: script.sh target_directory
# - check that parameters to script.sh are correct
...
# - list the contents of target_dir and its subdirectories
recurse_ls "$1"
Useful links:
variable expansion and parameter substitution
file type test operations
globbing (wildcard expansion)
quoting to account for blanks in variable values (including filenames)
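To make the skeleton concrete, here is one hedged way to fill it in (a sketch assuming a POSIX sh, and that hidden files may be ignored, since * does not match dotfiles):

#!/bin/sh
# Recursively print the regular files (not directories) under a directory.
# Usage: ./script.sh target_directory
recurse_ls()
{
    for entry in "$1"/*; do
        if [ -d "$entry" ]; then
            recurse_ls "$entry"      # descend into sub-directories
        elif [ -f "$entry" ]; then
            printf '%s\n' "$entry"   # print regular files, one per line
        fi
    done
}

recurse_ls "${1:-.}"

Running ./script.sh /var/log > ~/lsout.txt would then produce the one-file-per-line output the question hopes for.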
I'd guess that the answer they want is:
ls -R /var/log/ > /home/bqiu/lsout.txt
i.e. the original answer you said was wrong.
Except you may want to write it as:
ls -R /var/log/ > ~/lsout.txt
That way it outputs to the home directory of whoever is logged in, rather than just user "bqiu".
When it says "Run a recursive listing of all the files",
to me ls stands for listing and the -R option stands for recursive.
So the wording of the question suggests using ls -R to produce the listing.
But it depends on what format they want the listing in.