global alias for last file in current directory - shell

I often want to run a command on the most recent file in the current directory. Essentially I want a more general version of opening the last modified file in the directory with vi.
I am able to write a global alias in zsh that does part of what I need:
alias -g lafi='`ls -rt|tail -n 1`'
Now I can execute something like
cat lafi
and I will see the content of the most recent file in the current dir. Or I can issue echo lafi to figure out what the last file was (or I could even say ls -rt|tail -n 1).
Is there a way to modify the alias definition such that it outputs the last file (to STDERR?) and then hands it on like lafi above for further consumption on the command line? So for the above cat lafi I would hope for this output:
last file: <name of last-file>
<content of last-file>
I suspect this involves tee but my shell kung fu doesn't cover this in sufficient detail.

Perhaps
alias -g lafi='`ls -rt | tail -n 1 | tee >({ printf "last file: "; cat; } >&2)`'
I think zsh has process substitutions like that.
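With that alias in place, a session might look something like this (hypothetical file name; the "last file:" line goes to stderr, so it shouldn't pollute pipes or redirections):
$ cat lafi
last file: notes.txt
<content of notes.txt>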

Related

How to delay `redirection operator` of BASH `>`

First I create 3 files:
$ touch alpha bravo carlos
Then I want to save the list to a file:
$ ls > info.txt
However, I always got my info.txt inside:
$ cat info.txt
alpha
bravo
carlos
info.txt
It looks like the redirection operator creates my info.txt first.
In this case, my question is: how can I save my list of files before info.txt is created?
The main question is about the redirection operator: why does it act first, and how can I delay it so that my command completes first? Please use the example above in your answer.
When you redirect a command's output to a file, the shell opens a file handle to the destination file, then runs the command in a child process whose standard output is connected to this file handle. There is no way to change this order, but you can redirect to a file in a different directory if you don't want the ls output to include the new file.
ls >/tmp/info.txt
mv /tmp/info.txt ./
In a production script, you should make sure that the file name is unique and unpredictable.
t=$(mktemp -t lstemp.XXXXXXXXXX) || exit
trap 'rm -f "$t"' INT HUP
ls >"$t"
mv "$t" ./info.txt
Alternatively, capture the output into a variable, and then write that variable to a file.
files=$(ls)
echo "$files" >info.txt
As an aside, probably don't use ls in scripts. If you want a list of files in the current directory
printf '%s\n' *
does that.
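As a side note on ordering: the shell performs glob expansion before it opens the redirection target, so on its first run the following should not list info.txt (it will on later runs, once the file exists):
printf '%s\n' * > info.txt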
One simple approach is to save your command output to a variable, like this:
ls_output="$(ls)"
and then write the value of that variable to the file, using any of these commands:
printf '%s\n' "$ls_output" > info.txt
cat <<< "$ls_output" > info.txt
echo "$ls_output" > info.txt
Some caveats with this approach:
Bash variables can't contain null bytes. If the output of the command includes a null byte, that byte and everything after it will be discarded.
In the specific case of ls, though, this shouldn't be an issue, because the output of ls should never contain a null byte.
$(...) removes trailing newlines. The above compensates for this by adding a newline while creating info.txt, but if the command output ends with multiple newlines, then the above will effectively collapse them into a single newline.
In the specific case of ls, this could happen if a filename ends with a newline — very unusual, and unlikely to be intentional, but nonetheless possible.
Since the above adds a newline while creating info.txt, it will put a newline there even if the command output doesn't end with a newline.
In the specific case of ls, this shouldn't be an issue, because the output of ls should always end with a newline.
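Here is a small demonstration of the trailing-newline stripping in isolation (hypothetical session):
$ printf 'a\n\n' | wc -c
3
$ out=$(printf 'a\n\n'); printf '%s\n' "$out" | wc -c
2
The raw output is three bytes, but the command substitution strips both trailing newlines and printf '%s\n' adds back only one.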
If you want to avoid the above issues, another approach is to save your command output to a temporary file in a different directory, and then move it to the right place; for example:
tmpfile="$(mktemp)"
ls > "$tmpfile"
mv -- "$tmpfile" info.txt
. . . which obviously has different caveats (e.g., it requires access to write to a different directory), but should work on most systems.
One way to do what you want is to exclude the info.txt file from the ls output.
If you can rename the list file to .info.txt then it's as simple as:
ls >.info.txt
ls doesn't list files whose names start with . by default.
If you can't rename the list file but you've got GNU ls then you can use:
ls --ignore=info.txt >info.txt
Failing that, you can use:
ls | grep -v '^info\.txt$' >info.txt
All of the above options have the advantage that you can safely run them after the list file has been created.
Another general approach is to capture the output of ls with one command and save it to the list file with a second command. As others have pointed out, temporary files and shell variables are two specific ways to capture the output. Another way, if you've got the moreutils package installed, is to use the sponge utility:
ls | sponge info.txt
Finally, note that you may not be able to reliably extract the list of files from info.txt if it contains plain ls output. See ParsingLs - Greg's Wiki for more information.

Get output filename in Bash Script

I would like to get just the filename (with extension) of the output file I pass to my bash script:
a=$1
b=$(basename -- "$a")
echo $b #for debug
if [ "$b" == "test" ]; then
echo $b
fi
If I type in:
./test.sh /home/oscarchase/test.sh > /home/oscarchase/test.txt
I would like to get:
test.txt
in my output file but I get:
test.sh
How can I proceed to parse this first argument to get the right name?
Try this:
#!/bin/bash
output=$(readlink /proc/$$/fd/1)
echo "output is performed to \"$output\""
but please remember that this solution is system-dependent (it relies on the Linux /proc filesystem). I'm not sure that /proc has the same structure in e.g. FreeBSD, and this script certainly won't work in bash for Windows.
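For instance, saved as test.sh and run with stdout redirected (hypothetical session), the snippet reports the redirection target:
$ ./test.sh > /home/oscarchase/test.txt
$ cat /home/oscarchase/test.txt
output is performed to "/home/oscarchase/test.txt"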
Ahha, FreeBSD obsoleted procfs a while ago and now has a different facility called procstat. Its output should give you an idea of how to extract the information you need. I guess some awk-ing is required :)
Finding out the name of the file that is opened on file descriptor 1 (standard output) is not something you can do directly in bash; it depends on what operating system you are using. You can use lsof and awk to do this; it doesn't rely on the proc file system, and although the exact call may vary, this command worked for both Linux and Mac OS X, so it is at least somewhat portable.
output=$( lsof -p $$ -a -d 1 -F n | awk '/^n/ {print substr($1, 2)}' )
Some explanation:
-p $$ selects open files for the current process
-d 1 selects only file descriptor 1
-a is used to require that both -p and -d apply (the default is to show all files that match either condition)
-F n modifies the output so that you get one line per field, prefixed with an identifier character. With this, you'll get two lines: one beginning with p indicating the process ID, and one beginning with n indicating the file name of the file.
The awk command simply selects the line starting with n and outputs the first field minus the initial n.
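Folding that into the question's script, a sketch might look like this (note the debug message is sent to stderr, since stdout is exactly the file being detected):
#!/bin/bash
# Report the name of the file that stdout is redirected to.
out=$( lsof -p $$ -a -d 1 -F n | awk '/^n/ {print substr($1, 2)}' )
echo "output file: $(basename -- "$out")" >&2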

Output filename from input in bash

I have this script:
#!/bin/bash
FASTQFILES=~/Programs/ncbi-blast-2.2.29+/DB_files/*.fastq
FASTAFILES=~/Programs/ncbi-blast-2.2.29+/DB_files/*.fasta
clear
for file in $FASTQFILES
do cat $FASTQFILES | perl -e '$i=0;while(<>){if(/^\#/&&$i==0){s/^\#/\>/;print;}elsif($i==1){print;$i=-3}$i++;}' > ~/Programs/ncbi-blast-2.2.29+/DB_files/"${FASTQFILES%.*}.fasta"
mv $FASTAFILES ~/Programs/ncbi-blast-2.2.29+/db/
done
I'm trying it to grab the files defined in $FASTQFILES, do the .fastq to .fasta conversion, name the output with the same filename of the input, and move it to a new folder. E.g., ~/./DB_files/HELLO.fastq should give a converted ~/./db/HELLO.fasta
The problem is that the output of the conversion is a properly formatted hidden file called .fasta in the first folder instead of the expected one named HELLO.fasta. So there is nothing to mv. I think I'm messing up in the ${FASTQFILES%.*}.fasta argument but I can't seem to fix it.
I see three problems:
One part of your trouble is that you use cat $FASTQFILES instead of cat $file.
You also need to fix the I/O redirection at the end of that line to > ~/Programs/ncbi-blast-2.2.29+/DB_files/"${file%.fastq}.fasta".
The mv command needs to be executed outside the loop.
In fact, when processing a single file at a time, you don't need to use cat at all (UUOC — Useless Use Of Cat). Simply provide "$file" as an argument to the Perl script.
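Putting those three fixes together (and dropping cat per the UUOC note), the loop might look something like this sketch:
#!/bin/bash
DB=~/Programs/ncbi-blast-2.2.29+/DB_files
# Convert each .fastq into a .fasta with the same basename, then move the results.
for file in "$DB"/*.fastq
do perl -e '$i=0;while(<>){if(/^\#/&&$i==0){s/^\#/\>/;print;}elsif($i==1){print;$i=-3}$i++;}' "$file" > "${file%.fastq}.fasta"
done
mv "$DB"/*.fasta ~/Programs/ncbi-blast-2.2.29+/db/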

How to name output file according to a command line argument in a bash script?

These lines work when copy-pasted to the shell but don't work in a script:
ls -l file1 > /path/`echo !#:2`.txt
ls -l file2 > /path/`echo !#:2`.txt
 
ls -l file1 > /path/$(echo !#:2).txt
ls -l file2 > /path/$(echo !#:2).txt
What's the syntax for doing this in a bash script?
If possible, I would like to know how to do this for one file and for all files with the same extension in a folder.
Non-interactive shells have history expansion disabled.
Add the following two lines to your script to enable it:
set -o history
set -o histexpand
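For example, the script might then begin like this (an untested sketch; as the next answer explains, history expansion in scripts is fragile and better replaced with a variable):
#!/bin/bash
set -o history
set -o histexpand
ls -l file1 > /path/$(echo !#:2).txt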
(UPDATE: I misunderstood the original question as referring to arguments to the script, not arguments to the current command within the script; this is a rewritten answer.)
As @choroba said, history is disabled by default in scripts, because it's not really the right way to do things like this in a script.
The preferred way to do things like this in a script is to store the item in question (in this case the filename) in a variable, then refer to it multiple times in the command:
fname=file1
ls -l "$fname" > "/path/$fname.txt"
Note that you should almost always put variable references inside double-quotes (as I did above) to avoid trouble if they contain spaces or other shell metacharacters. If you want to do this for multiple files, use a for loop:
for fname in *; do # this will repeat for each file (or directory) in the current directory
ls -l "$fname" > "/path/$fname.txt"
done
If you want to operate on files someplace other than the current directory, things are a little more complicated. You can use /inputpath/*, but it'll include the path along with each filename (e.g. it'd run the loop with "/inputpath/file1", "/inputpath/file2", etc), and if you use that directly in the output redirect you'll get something like > /path/inputpath/file1.txt (i.e. the two different paths will get appended together), probably not what you want. In this case, you can use the basename command to strip off the unwanted path for output purposes:
for fpath in /inputpath/*; do
ls -l "$fpath" > "/path/$(basename "$fpath").txt"
done
If you want a list of files with a particular extension, just use *.foo or /inputpath/*.foo as appropriate. However, in this case you'll wind up with the output going to files named e.g. "file1.foo.txt"; if you don't want stacked extensions, basename has an option to trim that as well:
for fpath in /inputpath/*.foo; do
ls -l "$fpath" > "/path/$(basename "$fpath" .foo).txt"
done
Finally, it might be neater (depending how complex the actual operation is, and whether it occurs multiple times in the script) to wrap this in a function, then use that:
doStuffWithFile() {
ls -l "$1" > "/path/$(basename "$1" "$2").txt"
}
for fpath in /inputpath/*.foo; do
doStuffWithFile "$fpath" ".foo"
done
doStuffWithFile /otherpath/otherfile.bar .bar

Shell script not running, command not found

I am very, very new to UNIX programming (running on MacOSX Mountain Lion via Terminal). I've been learning the basics from a bioinformatics and molecular methods course (we've had two classes) where we will eventually be using perl and python for data management purposes. Anyway, we have been tasked with writing a shell script to take data from a group of files and write it to a new file in a format that can be read by a specific program (Migrate-N).
I have gotten a number of functions to do exactly what I need independently when I type them into the command line, but when I put them all together in a script and try to run it I get an error. Here are the details (I apologize for the length):
#! /bin/bash
grep -f Samples.NFCup.txt locus1.fasta > locus1.NFCup.txt
grep -f Samples.NFCup.txt locus2.fasta > locus2.NFCup.txt
grep -f Samples.NFCup.txt locus3.fasta > locus3.NFCup.txt
grep -f Samples.NFCup.txt locus4.fasta > locus4.NFCup.txt
grep -f Samples.NFCup.txt locus5.fasta > locus5.NFCup.txt
grep -f Samples.Salmon.txt locus1.fasta > locus1.Salmon.txt
grep -f Samples.Salmon.txt locus2.fasta > locus2.Salmon.txt
grep -f Samples.Salmon.txt locus3.fasta > locus3.Salmon.txt
grep -f Samples.Salmon.txt locus4.fasta > locus4.Salmon.txt
grep -f Samples.Salmon.txt locus5.fasta > locus5.Salmon.txt
grep -f Samples.Cascades.txt locus1.fasta > locus1.Cascades.txt
grep -f Samples.Cascades.txt locus2.fasta > locus2.Cascades.txt
grep -f Samples.Cascades.txt locus3.fasta > locus3.Cascades.txt
grep -f Samples.Cascades.txt locus4.fasta > locus4.Cascades.txt
grep -f Samples.Cascades.txt locus5.fasta > locus5.Cascades.txt
echo 3 5 Salex_melanopsis > Smelanopsis.mig
echo 656 708 847 1159 779 >> Smelanopsis.mig
echo 154 124 120 74 126 NFCup >> Smelanopsis.mig
cat locus1.NFCup.txt locus2.NFCup.txt locus3.NFCup.txt locus4.NFCup.txt locus5.NFCup.txt >> Smelanopsis.mig
echo 32 30 30 18 38 Salmon River >> Smelanopsis.mig
cat locus1.Salmon.txt locus2.Salmon.txt locus3.Salmon.txt locus4.Salmon.txt locus5.Salmon.txt >> Smelanopsis.mig
echo 56 52 24 29 48 Cascades >> Smelanopsis.mig
cat locus1.Cascades.txt locus2.Cascades.txt locus3.Cascades.txt locus4.Cascades.txt locus5.Cascades.txt >> Smelanopsis.mig
The series of greps are just pulling out DNA sequence data for each site for each locus into new text files. The Samples...txt files have the sample ID numbers for a site, the .fasta files have the sequence information organized by sample ID; the grepping works just fine in command line if I run it individually.
The second group of code creates the actual new file I need to end up with, that ends in .mig. The echo lines are data about counts (basepairs per locus, populations in the analysis, samples per site, etc.) that the program needs information on. The cat lines are to mash together the locus by site data created by all the grepping below the site-specific information dictated in the echo line. You no doubt get the picture.
For creating the shell script, I've been starting in Excel so I can easily copy-paste/autofill cells, saving as tab-delimited text, then opening that text file in TextWrangler to remove the tabs before saving as a .sh file (Line breaks: Unix (LF) and Encoding: Unicode (UTF-8)) in the same directory as all the files used in the script. I've tried using chmod +x FILENAME.sh and chmod u+x FILENAME.sh to try to make sure it is executable, but to no avail. Even if I cut the script down to just a single grep line (with the #! /bin/bash first line) I can't get it to work. The process only takes a moment when I type it directly into the command line as none of these files are larger than 160KB and some are significantly smaller. This is what I type in and what I get when I try to run the file (HW is the correct directory):
localhost:HW Mirel$ MigrateNshell.sh
-bash: MigrateNshell.sh: command not found
I've been at this impasse for two days now, so any input would be greatly appreciated! Thanks!!
For security reasons, the shell will not search the current directory (by default) for an executable. You have to be specific, and tell bash that your script is in the current directory (.):
$ ./MigrateNshell.sh
Change the first line to the following, as pointed out by Marc B:
#!/bin/bash
Then mark the script as executable and execute it from the command line:
chmod +x MigrateNshell.sh
./MigrateNshell.sh
Or simply execute bash from the command line, passing in your script as a parameter:
/bin/bash MigrateNshell.sh
Make sure you are not using PATH as a variable name in your script, which would override the existing PATH environment variable.
Also try running dos2unix on the shell script, because sometimes it has Windows line endings and the shell does not recognize them:
$ dos2unix MigrateNshell.sh
This helps sometimes.
#! /bin/bash
^---
remove the indicated space. The shebang should be
#!/bin/bash
Unix has a variable called PATH that holds a list of directories in which to find commands.
$ echo $PATH
/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/Users/david/bin
If I type a command foo at the command line, my shell will first see if there's an executable command /usr/local/bin/foo. If there is, it will execute /usr/local/bin/foo. If not, it will see if there's an executable command /usr/bin/foo and if not there, it will look to see if /bin/foo exists, etc. until it gets to /Users/david/bin/foo.
If it can't find a command foo in any of those directories, it tells me command not found.
There are several ways I can handle this issue:
Use the command bash foo since foo is a shell script.
Include the directory name when you execute the command, like /Users/david/foo or $PWD/foo or just plain ./foo.
Change your $PATH variable to add the directory that contains your commands to the PATH.
You can modify $HOME/.bash_profile or $HOME/.profile if .bash_profile doesn't exist. I did that to add in /usr/local/bin, which I placed first in my path. This way, I can override the standard commands that are in the OS. For example, I have Ant 1.9.1, but the Mac came with Ant 1.8.4. I put my ant command in /usr/local/bin, so my version of ant will execute first. I also added $HOME/bin to the end of the PATH for my own commands. If I had a file like the one you want to execute, I'd place it in $HOME/bin to execute it.
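Based on that description, the relevant line in ~/.bash_profile might look something like this (a sketch; adjust the directories to your setup):
export PATH="/usr/local/bin:$PATH:$HOME/bin"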
Try chmod u+x MigrateNshell.sh
There have been a few good comments about adding the shebang line to the beginning of the script. I'd like to add a recommendation to use the env command as well, for additional portability.
While #!/bin/bash may be the correct location on your system, that's not universal. Additionally, that may not be the user's preferred bash. #!/usr/bin/env bash will select the first bash found in the path.
Also make sure /bin/bash is the proper location for bash. If you took that line from an example somewhere, it may not match your particular server. If you specify an invalid location for bash, you're going to have a problem.
Add the following lines to your .profile:
PATH=$PATH:$HOME/bin:$Dir_where_script_exists
export PATH
Now your script should work without ./
I'm new to shell scripting too, but I had this same issue. Make sure your script ends with a newline (i.e. a blank last line); otherwise it may not run.
First:
chmod 777 ./MigrateNshell.sh
Then:
./MigrateNshell.sh
Or, add your program's directory to your $PATH variable, which will then allow you to call your program without ./
