Fast way to edit nth line of previous command output - bash

I often find myself in a workflow like this:
$ find . |grep somefile
./tmp/somefile.xml
./test/another-somefile.txt
(review output)
$ vim ./tmp/somefile.xml
Now, it would be neat if there were some convenient way to take the output of the find command and feed it to vim.
The best I've come up with is:
$ nth () { sed -n $1p; }
$ find . |grep somefile
./tmp/somefile.xml
./test/another-somefile.txt
(review output)
$ vim `!!|nth 2`
I was wondering if there are other, maybe prettier, ways of accomplishing the same thing?
To clarify, I want a convenient way of grabbing the nth line from a previously run command to quickly open that file for editing in vim, without having to cut & paste the filename with the mouse or tab-complete my way through the file path.

way 1: don't pass the exact file to vim, pass the whole output and choose the file inside vim
currently you are working in two steps:
1 - launch the find/grep command
2 - vim !!....
if you are sure that you want to use vim to open one (or more) file(s) from the find result, you may try:
find . (with grep if you like) | vim -
then you have the whole output in vim; move the cursor to the file you want to edit and press gf. (I do this sometimes)
way 2: refine the regex in your find (or grep) so it matches the single file you want to edit.
this is not a hard thing at all, and then you can just vim !!.
your nth() is nice. however, imagine there are 30 lines of output and your file sits on line 16: how do you count it? sure, you can add |nl at the end, but then you can no longer use !! directly..
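If you do number the output with nl, a small wrapper can re-run the command and grab the chosen line in one step (a hypothetical helper, not a standard tool):
pick () { local n=$1; shift; "$@" | sed -n "${n}p"; }
# usage: open the 16th match without retyping the pipeline
vim "$(pick 16 find . -name '*somefile*')"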
just my 2 cents

Modified after your comment. Not sure if it's "convenient" though.. This grabs the third line from the end:
command | tail -n3 | head -n1 | xargs vim
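To grab the nth line from the top instead, swap the pair (a sketch; here n = 2):
command | head -n2 | tail -n1 | xargs vim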

Maybe this is what you're looking for?
find . -name "*somefile*" -exec vim -p {} +
(With +, find passes all matches to a single vim invocation, so -p actually opens them in tabs; with \; vim would be launched once per file.)

If you want an interactive review maybe you can use something like this:
TMP_LIST=""; for i in $(find . | grep somefile); do echo "$i"; read -p "(y/n)? "; [ "$REPLY" = "y" ] && TMP_LIST="$TMP_LIST $i"; done; vim $TMP_LIST
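Bash's built-in select offers a similar interactive menu (a sketch; like the loop above, it word-splits, so it assumes filenames without spaces):
select f in $(find . | grep somefile); do vim "$f"; break; done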

You almost did it!!
pearl.251> cat file1
a b c d e f
pearl.252> find . -name "file*"
./file1
./file2
./file3
./file4
./file5
./file6
./file7
pearl.253> vi `!!|awk 'NR==1'`
The last line over here will open file1 in vi.
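To grab a different match, just change the line number that awk tests (a sketch; here the second line):
vi `!!|awk 'NR==2'`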

Related

Why am I getting some extra, weird characters when making a file from grep output?

I am doing a very basic command that never gave me trouble in the past, but is inexplicably returning undesired characters now.
I am in bash on Linux, and simply want to search through a directory and make a file containing the filenames that match a pattern:
ls | grep "*.file_ID" > my_list.txt
...This works fine, and if I cat the data:
cat my_list.txt
seriesA.file_ID
seriesB.file_ID
seriesC.file_ID
However, when I try to feed this file into downstream processes, I keep getting weird errors, as if the file isn't properly formatted as a list of file names. When I open the file in vim to reveal any unnecessary characters, I find that the file actually looks like this:
vi my_list.txt
^[[00mseriesA.file_ID^[[00m
^[[00mseriesB.file_ID^[[00m
^[[00mseriesC.file_ID^[[00m
For some reason, every line starts and ends with the characters ^[[00m. If I delete these characters, all of the downstream processes work fine. However, I need my scripts to generate such a file list automatically, so I can't keep going in and deleting these characters manually.
Does anyone know what is producing the ^[[00m characters? I have no idea where they are coming from, and I need to be able to generate files without them.
Thanks!
Probably your GREP_OPTIONS environment variable contains --color=always, which causes the output to be stuffed with control characters, even when piped to a file.
Use --color=auto instead.
http://www.gnu.org/software/grep/manual/html_node/Environment-Variables.html
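You can see the stray codes for yourself by piping through cat -v, which makes non-printing characters visible (a quick demonstration, assuming GNU grep):
ls | grep --color=always "file_ID" | cat -v
ls | grep --color=auto "file_ID" | cat -v
The first command shows the ^[[...m sequences; the second is clean, because with --color=auto grep drops the colors when writing to a pipe.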
Even better, don't use grep:
ls *.file_ID > my_list.txt
Usually grep is already run as grep --color=auto for you.
If its output pipes into a csv file and the file ends up looking like this: ^[[00...^[[00m
then you would have to type this in the terminal:
grep --color=auto "your regex" > example.csv
If you want this to be permanent, so that you do not have to type --color=auto every time, type this in the terminal:
export GREP_OPTIONS='--color=auto'
more info:
https://linuxcommando.blogspot.com/2007/10/grep-with-color-output.html
Don't use ls:
printf "%s\n" *.file_ID > my_list.txt
This should take care of it (assuming GNU find and no directory traversing):
find . -maxdepth 1 -type f -name "*.file_ID" -printf "%f\n" > my_list.txt
Example:
~> ls *file_ID*
a.file_ID b.file_ID c.file_ID
~> find . -maxdepth 1 -type f -name "*.file_ID" -printf "%f\n" > my_list.txt
~> cat my_list.txt
a.file_ID
b.file_ID
c.file_ID
As far as the "^[[00m" characters, check your ls options:
~> alias -p | grep "ls="
You may get something like:
alias ls='/bin/ls $LS_OPTIONS'
If so, check env for this:
~> env | grep LS_OP
LS_OPTIONS=-N --color=tty -T 0
The character string you're referencing is used to turn off colors, so your shell likely has been set to show colors. Removing and/or changing the ls alias should resolve it.
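If you already have a file polluted with these sequences, you can also strip them after the fact (a sketch, assuming GNU sed, which understands \x1b for the escape character):
sed 's/\x1b\[[0-9;]*m//g' my_list.txt > my_list_clean.txt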
The weird characters such as ^[[00m are escape characters for colorizing the output. Color output for ls is most likely enabled through an alias in your environment.
To avoid getting these color characters, you can try disabling the ls alias temporarily with a backslash:
\ls *.txt
Or you can use printf command instead.
printf "%s\n" *.txt

Changing file extensions for all files in a directory on OS X

I have a directory full of files with one extension (.txt in this case) that I want to automatically convert to another extension (.md).
Is there an easy terminal one-liner I can use to convert all of the files in this directory to a different file extension?
Or do I need to write a script with a regular expression?
You could use something like this:
for old in *.txt; do mv "$old" "$(basename "$old" .txt).md"; done
Make a copy first!
Alternatively, you could install the ren (rename) utility
brew install ren
ren '*.txt' '#1.md'
If you want to rename files with prefix or suffix in file names
ren 'prefix_*.txt' 'prefix_#1.md'
Terminal is not necessary for this... Just highlight all of the files you want to rename. Right click and select "Rename ## items" and just type ".txt" into the "Find:" box and ".md" into the "Replace with:" box.
The preferred Unix way to do this (yes, OS X is based on Unix) is:
ls | sed 's/^\(.*\)\.txt$/mv "\1.txt" "\1.md"/' | sh
Why loop with for when ls by design already loops through the whole list of filenames? You've got pipes, use them. Commands can create and modify not only data but also other commands (that is, commands created by a command, which is what Brian Kernighan, one of the inventors of Unix, liked most about Unix), so let's take a look at what ls and sed produce by removing the pipe to sh:
$ ls | sed 's/^\(.*\)\.txt$/mv "\1.txt" "\1.md"/'
mv "firstfile.txt" "firstfile.md"
mv "second file.txt" "second file.md"
$
As you can see, it is not only a one-liner, but a complete script, which furthermore works by creating another script as its output. So let's just feed the script produced by the one-liner to sh, which is the script interpreter of OS X. Of course it works even for filenames with spaces in them.
BTW: Every time you type something in Terminal you create a script, even if it is only a single command with one word like ls or date etc. Everything running in a Unix shell is always a script/program, which is just some ASCII-based stream (in this case an instruction stream opposed to a data stream).
To see the actual commands being executed by sh, just add the -x option after sh, which turns on debugging output in the shell, so you will see every mv command being executed with the actual arguments passed by the sed editor script (yeah, another script inside the script :-) ).
However, if you like complexity you can even use awk, and if you like installing other programs just to do basic work, there is ren. I even know people who would prefer to write a 50-line or so perl script for this simple everyday task.
Maybe it's easier to rename files in Finder, but when connected to a Mac remotely (e.g. via ssh), using Finder is not possible at all. That's why the command line is still very useful.
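For completeness, the same rename can be done in pure bash with globbing and parameter expansion, no ls parsing involved (a sketch; it handles spaces in filenames too):
for f in *.txt; do mv "$f" "${f%.txt}.md"; done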
Based on the selected and most accurate answer above, here's a bash function for reusability:
function change_all_extensions() {
for old in *."$1"; do mv "$old" "$(basename "$old" ."$1")"."$2"; done
}
Usage:
$ change_all_extensions txt md
(I couldn't figure out how to get clean code formatting in a comment on that answer.)
No need to write a script for it; just run this command:
find . -name "*.txt" | sed 's/\.txt$//' | xargs -I '{}' mv '{}.txt' '{}.md'
You do not need a terminal for this one; here is a sample demonstration in macOS Big Sur.
Select all the files, right-click and select "Rename...".
Add the existing file extension in "Find" and the extension you want to replace it with in "Replace with".
And done!
I had a similar problem where file names ended in .gifx.gif, and this worked in OS X to strip the trailing x.gif:
for old in *.gifx.gif; do
mv "$old" "$(echo "$old" | sed 's/x\.gif$//')";
done
cd "$YOUR_DIR"
ls *.txt > abc
mkdir target   # say I want to move the files to another directory, target in this case
while read line
do
file=$(echo "$line" | awk -F. '{ print $1 }')
cp "$line" target/"$file".md   # depends on whether you want to move (mv) or copy (cp)
done < abc
list=$(ls)
for file in $list
do
newf=$(echo "$file" | cut -f1 -d'.')
echo "The newf is $newf"
mv "$file" "$newf".jpg
done

Extract part of a filename shell script

In bash I would like to extract part of many filenames and save that output to another file.
The files are formatted as coffee_{SOME NUMBERS I WANT}.freqdist.
#!/bin/sh
for f in $(find . -name 'coffee*.freqdist')
That code will find all the coffee_{SOME NUMBERS I WANT}.freqdist file. Now, how do I make an array containing just {SOME NUMBERS I WANT} and write that to file?
I know that to write to file one would end the line with the following.
> log.txt
I'm missing the middle part though of how to filter the list of filenames.
You can do it natively in bash as follows:
filename=coffee_1234.freqdist
tmp=${filename#*_}
num=${tmp%.*}
echo "$num"
This is a pure bash solution. No external commands (like sed) are involved, so this is faster.
Append these numbers to a file using:
echo "$num" >> file
(You will need to delete/clear the file before you start your loop.)
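Putting it together with a loop over the files from the question (a sketch; the initial > log.txt truncates the file before the loop starts):
> log.txt
for f in coffee_*.freqdist; do
    tmp=${f#*_}
    echo "${tmp%.*}" >> log.txt
done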
If the intention is just to write the numbers to a file, you do not need find command:
ls coffee*.freqdist
coffee112.freqdist coffee12.freqdist coffee234.freqdist
The below should do it which can then be re-directed to a file:
$ ls coffee*.freqdist | sed 's/coffee\(.*\)\.freqdist/\1/'
112
12
234
Guru.
The previous answers have indicated some necessary techniques. This answer organizes the pipeline in a simple way that might apply to other jobs as well. (If your sed doesn't support ‘;’ as a separator, replace ‘;’ with ‘|sed’.)
$ ls */c*; ls c*
fee/coffee_2343.freqdist
coffee_18z8.x.freqdist coffee_512.freqdist coffee_707.freqdist
$ find . -name 'coffee*.freqdist' | sed 's/.*coffee_//; s/[.].*//' > outfile
$ cat outfile
512
18z8
2343
707

BASH file attribute gymnastics: How do I easily get a file with full paths and privileges?

Dear Masters of The Command Line,
I have a directory tree for which I want to generate a file that contains on two entries per line: full path for each file and the corresponding privileges of said file.
For example, one line might contain:
/v1.6.0.24/lib/mylib.jar -r-xr-xr-x
The best way to generate the left-hand column there appears to be find. However, because ls doesn't seem to have the capability either to read a list of filenames or to take them from stdin, it looks like I have to resort to a script that does this for me. ...Cumbersome.
I was sure I've seen people somehow get find to run a command against each file found but I must be daft this morning as I can't seem to figure it out!
Anyone?
In terms of reading said file: there might be spaces in the filenames, so it sure would be nice if there was a way to get some of the existing command-line tools to count fields right to left. For example, we have cut; however, cut is left-hand-first and won't take a negative number to mean starting the numbering on the right (which seems the most obvious syntax to me). Without having to write a program to do it, are there any easy ways?
Thanks in advance, and especial thanks for explaining any examples you may provide!
Thanks,
RT
GNU findutils 4.2.5+:
find -printf "$PWD"'/%p %M\n'
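For example, to write the two-column file from the question (GNU find; %p is the path relative to the starting point, %M the symbolic permissions):
find . -type f -printf "$PWD"'/%p %M\n' > files_with_perms.txt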
It can also be done with ls and awk:
ls -l -d $PWD/* | awk '{print $9 " " $1}' > my_files.txt
stat -c %A file
Will print file permissions for file.
Something like:
find . -exec echo -ne '{}\t\t' ';' -exec stat -c %A {} ';'
Will give you a badly formatted version of what you're after.
It is made much trickier because you want everything aligned in tables. You might want to look into the column command. TBH I would just relax my output requirements a little bit. Formatting output in sh is a pain in the ass.
bash 4
shopt -s globstar
for file in /path/**
do
stat -c "%n %A" "$file"
done

How can I process a list of files that includes spaces in its names in Unix?

I'm trying to list the files in a directory and do something to them in the Mac OS X prompt.
It should go like this: for f in $(ls -1); do echo $f; done
If I have files without spaces in their names (fileA.txt, fileB.txt), the echo works fine.
If the files include spaces in their names ("file A.txt", "file B.txt"), I get 4 strings (file, A.txt, file, B.txt).
I've tried quoting the listing command, but it only changed the problem.
If I do this: for f in "$(ls -1)"; do echo "$f"; done
I get: file A.txt\nfile B.txt
(It displays correctly, but it is a single string, and I need the 2 lines separated.)
Step away from ls if at all possible. Use find from the findutils package.
find /target/path -type f -print0 | xargs -0 your_command_here
-print0 will cause find to output the names separated by NUL characters (ASCII zero). The -0 argument to xargs tells it to expect the arguments separated by NUL characters too, so everything will work just fine.
Replace /target/path with the path under which your files are located.
-type f will only locate files. Use -type d for directories, or omit altogether to get both.
Replace your_command_here with the command you'll use to process the file names. (Note: If you run this from a shell using echo for your_command_here you'll get everything on one line - don't get confused by that shell artifact, xargs will do the expected right thing anyway.)
Edit: Alternatively (or if you don't have xargs), you can use the much less efficient
find /target/path -type f -exec your_command_here \{\} \;
\{\} \; is the escape for {} ;, where {} is the placeholder for the currently processed file and ; terminates the command. find will then invoke your_command_here with {} replaced by the file name, and since your_command_here is launched by find and not by the shell, the spaces won't matter.
The second version will be less efficient since find will launch a new process for each and every file found. xargs is smart enough to pipe the commands to a newly launched process if it can figure it's safe to do so. Prefer the xargs version if you have the choice.
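A concrete example, counting lines in every file regardless of spaces in the names (a sketch):
find /target/path -type f -print0 | xargs -0 wc -l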
for f in *; do echo "$f"; done
should do what you want. Why are you using ls instead of * ?
In general, dealing with spaces in shell is a PITA. Take a look at the $IFS variable, or better yet at Perl, Ruby, Python, etc.
Here's an answer using $IFS as discussed by derobert
http://www.cyberciti.biz/tips/handling-filenames-with-spaces-in-bash.html
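A minimal sketch of that approach, restricting word splitting to newlines for the duration of the loop:
IFS=$'\n'
for f in $(ls -1); do echo "$f"; done
unset IFS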
You can pipe the arguments into read. For example, to cat all files in the directory:
ls -1 | while read FILENAME; do cat "$FILENAME"; done
This means you can still use ls, as you have in your question, or any other command that produces $IFS delimited output.
The while loop makes it much easier to do several things to the argument, and makes complex processing more readable in my opinion. A contrived example:
ls -1 | while read FILE
do
echo 1: "$FILE"
echo 2: "$FILE"
done
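A slightly more robust variant (a sketch): -r stops read from interpreting backslashes, and IFS= preserves leading and trailing whitespace in each name:
ls -1 | while IFS= read -r FILE
do
    printf '%s\n' "$FILE"
done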
Look at the --quoting-style option.
For instance, --quoting-style=c would produce:
$ ls --quoting-style=c
"file1" "file2" "dir one"
Check out the manpage for xargs:
it works like this:
ls -1 /tmp/*.jpeg | xargs rm
