multi process bash within fzf --preview feature - bash

I am trying to use fzf in the following manner: I would like to search for a term within my codebase and then, in the preview window, see the file containing the string at the line where it is found.
So far I have managed to fuzzy-search the codebase for various terms by piping a ripgrep search of all files in the current directory and below into fzf, and I have used cut to parse out the file name for cat or tail to read and print to the preview window. This is the command used for that:
rg . -n | fzf --preview="cut -d':' -f1 <<< {} | xargs cat"
Note the string represented by {} is in the following format:
myfile.c:72:The string I am fuzzy searching
My issue is that I cannot parse out both the filename and the line number.
I have tried passing a bash script within the preview command, as well as using $() as in the following example. (Note that here I use tail with the --lines=+N argument to print the file from line N onward.)
rg . -n | fzf --preview="tail $(cut -d':' -f1 <<< {}) --lines=+$(cut -d':' -f2 <<< {})"
This does not work, nor does a variety of variants on this attempt; the $() substitutions are expanded by the interactive shell before fzf ever runs, so {} is never filled in. Any help or feedback is appreciated.
Edit (1):
I've tried to split it into an array like so:
rg . -n | fzf --preview="IFS=":" read -r -a arr <<< {}| xargs tail ${arr[0]} --lines=+${arr[1]}"
This works in that the preview does show the file at the line where the string is found; however, it does not update as I cycle through other fuzzy-found suggestions.

So I eventually figured out a solution that works.
It involves calling a separate script from the subprocess running inside --preview. I used the following script to take the string which fzf passes to --preview (in the format filename:linenumber:found_string) and then used bat to render a preview window with syntax highlighting.
This method is pretty good but somewhat resource-intensive. I'm hoping to lessen the load by adding to the ignore glob and by using ripgrep rather than find, as ripgrep seems more efficient.
The bash script, which I call string2arg.sh:
#!/bin/bash
string2arg() {
    # fzf hands us the selected line as filename:linenumber:found_string
    arg_filename=$(cut -d':' -f1 <<< "$1")
    arg_linenum=$(cut -d':' -f2 <<< "$1")
    # Show context around the match: up to 25 lines above, 75 below
    min_offset=25
    max_offset=$((min_offset * 3))
    min=0
    if ((min_offset < arg_linenum)); then
        min=$((arg_linenum - min_offset))
    fi
    max=$((arg_linenum + max_offset))
    bat --color=always --highlight-line "$arg_linenum" \
        --style=header,grid,numbers \
        --line-range "$min:$max" "$arg_filename"
}
This is then called from my fzf alias for searching as such:
alias fsearch='rg . -n -g "!*.html" | fzf --preview="source $SC/string2arg.sh; string2arg {}"'
where $SC is the directory containing my bash script string2arg.sh.
If I'm searching for a term with the intent to open the file it's found in, at the line it's found in, I use the following bash alias.
alias vfsearch='export vfile=$(fsearch);vim +$(cut -d":" -f2 <<< $vfile) $(cut -d":" -f1 <<< $vfile)'
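As an aside, newer fzf versions can split the line on a delimiter themselves and expose the fields as {1}, {2}, and so on inside the preview command, which would avoid the cut parsing entirely. A minimal sketch, assuming an fzf recent enough to support field index expressions and bat on the PATH:

# Let fzf itself split each line on ':' -- {1} is the filename and {2}
# the line number from rg's file:line:text output.
rg . -n | fzf --delimiter ':' \
    --preview 'bat --color=always --highlight-line {2} {1}'

If the installed version also accepts a preview scroll offset, something like --preview-window '+{2}-10' keeps the highlighted line in view.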
Also, I happen to use the following defaults for fzf and find they work well for me, although since I've moved to tmux I sometimes find it better to show the preview window above rather than to the side.
export FZF_DEFAULT_COMMAND="fd --type file --color=always"
export FZF_DEFAULT_OPTS="--reverse --inline-info --ansi"
export FZF_COMPLETION_TRIGGER=']]'
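For the preview-above layout, the position can be baked into those defaults; a minimal sketch, assuming an fzf recent enough to understand --preview-window positions:

# Same defaults as above, with the preview stacked on top at 60% height
export FZF_DEFAULT_OPTS="--reverse --inline-info --ansi --preview-window=up:60%"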
I find this extremely useful and am planning on moving it inside my vim sessions. Hope it helps others!
Screenshot to better illustrate the use case.

Related

Defining a variable using head and cut

This might be an easy question; I'm new to bash and haven't been able to find the solution to my question.
I'm writing the following script:
for file in `ls *.map`; do
    ID=${file%.map}
    convertf -p ${ID}_par   # this is a program that I use, no problem
    NAME=${head -n 1 ${ID}.ind | cut -f1 -d":"}   # This step is the problem: I just want to take the first column of the first line of the file ${ID}.ind
It gives me the error:
line 5: bad substitution
any help?
Thanks!
There are a couple of issues in your code:
for file in `ls *.map` does not do what you want. It will fail e.g. if any of the filenames contains a space or *, but there's more. See http://mywiki.wooledge.org/BashPitfalls#for_i_in_.24.28ls_.2A.mp3.29 for details.
You should just use for file in *.map instead.
ALL_UPPERCASE names are generally used for system variables and built-in shell variables. Use lowercase for your own names.
That said,
for file in *.map; do
    id="${file%.map}"
    convertf -p "${id}_par"
    name="$(head -n 1 "${id}.ind" | cut -f1 -d":")"
    ...
done
looks like it would work. We just use $( cmd ) to capture the output of a command in a string.
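A tiny demonstration of that capture, using a hypothetical sample.ind whose first line is IND123:pop1:status:

# $( ... ) runs the pipeline and substitutes its output into the assignment
name="$(head -n 1 sample.ind | cut -f1 -d':')"
echo "$name"    # prints IND123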

Script to search specific string with within a directory

I am new to this site and to shell scripting. I am still very much a novice and haven't had much success scripting, because I am "attempting" to learn on my own. I was hoping one of you script gurus could get me on the right track. Here's the situation: I am a network engineer, and often I need to find specific lines of code within hundreds of files. For instance, I might need to find out which devices are running specific code. Typically I go to the directory where my configuration files are located and do the following, which does exactly what I need it to do:
fgrep -w "" * | sort -t/ -k5 -n
I pop whatever I am looking for in between the quotations to get my search result. What I would like to do is write a script that will ask me what I am searching for, then search the directory I am in, and then return the results. Any help would be greatly appreciated.
Many Thanks,
Diz
Add this to your .bashrc file, or whatever config file is loaded when you login:
mygrep() { fgrep -w "$1" * | sort -t/ -k5 -n; }
export -f mygrep
This sets up a shell function that you can then use to search; use double quotes if you have a search string with spaces in it:
$ mygrep SEARCH_PATTERN
$ mygrep "SEARCH WITH SPACES"
You can do it as follows:
#!/bin/bash
read -p "Enter string you want to search?" str
find . -type f -exec grep "${str}" {} \;
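A slight variant of the same idea that also prints which file each match came from; the -H and -w flags are assumptions here (both exist in GNU and BSD grep), with -w mirroring the fgrep -w in the question:

#!/bin/bash
# Prompt for a term, then search every regular file below the
# current directory, printing filename:matching-line for each hit.
read -p "Enter string you want to search? " str
find . -type f -exec grep -Hw "${str}" {} +

The trailing + hands many files to each grep invocation instead of forking one grep per file, which is noticeably faster on large trees.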

shell script : how to replace file name

I want to change some file names with full path similar to this:
/home/guest/test
⟶ /home/guest/.test.log
I tried the command below, but it cannot match the "/":
string="/home/guest/test"
substring="/"
replacement="/."
echo ${string/%substring/replacement}.log
You can do something like:
for file in /home/guest/*; do
    name=${file##*/}
    path=${file%/*}
    mv "$file" "$path"'/.'"$name"'.log'
done
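The same two expansions answer the original question directly, without a loop; a quick check against the asker's example path:

string="/home/guest/test"
# ${string%/*} keeps everything before the last '/', ${string##*/} everything after it
echo "${string%/*}/.${string##*/}.log"    # -> /home/guest/.test.log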
Created using bash on a mac, so it might work with whatever shell you are using...
string="/home/guest/test"
echo $string | sed 's/\/\([^\/]\{0,\}\)$/\/.\1.log/'
Using simple shell string replacement wasn't going to work since I know of no way you can target the last occurrence of the / sign as the only replacement.
Update:
Actually I came to think of an alternative way, if you know that the path is always of the form "/two/directories/in":
string="/home/guest/test"
firstpartofstring=$(echo $string | cut -d\/ -f1-3)
lastpartofstring=$(echo $string | cut -d\/ -f4)
echo ${firstpartofstring}/.${lastpartofstring}.log
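If the directory depth is not fixed, dirname and basename do the same split without counting fields; a sketch of the equivalent:

string="/home/guest/test"
echo "$(dirname "$string")/.$(basename "$string").log"    # -> /home/guest/.test.log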

bash: shortest way to get n-th column of output

Let's say that during your workday you repeatedly encounter the following form of columnized output from some command in bash (in my case from executing svn st in my Rails working directory):
?       changes.patch
M       app/models/superman.rb
A       app/models/superwoman.rb
In order to work with the output of your command (in this case the filenames), some sort of parsing is required so that the second column can be used as input for the next command.
What I've been doing is to use awk to get at the second column, e.g. when I want to remove all files (not that that's a typical use case :), I would do:
svn st | awk '{print $2}' | xargs rm
Since I type this a lot, a natural question is: is there a shorter (thus cooler) way of accomplishing this in bash?
NOTE:
What I am asking is essentially a shell command question even though my concrete example is on my svn workflow. If you feel that workflow is silly and suggest an alternative approach, I probably won't vote you down, but others might, since the question here is really how to get the n-th column command output in bash, in the shortest manner possible. Thanks :)
You can use cut to access the second field:
cut -f2
Edit:
Sorry, didn't realise that SVN doesn't use tabs in its output, so that's a bit useless. You can tailor cut to the output but it's a bit fragile - something like cut -c 10- would work, but the exact value will depend on your setup.
Another option is something like: sed 's/.\s\+//'
To accomplish the same thing as:
svn st | awk '{print $2}' | xargs rm
using only bash you can use:
svn st | while read a b; do rm "$b"; done
Granted, it's not shorter, but it's a bit more efficient and it handles whitespace in your filenames correctly.
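To see why it handles whitespace, note that read puts the first field in a and the whole rest of the line in b; a small demonstration with a made-up filename:

line='M app/models/super hero.rb'
echo "$line" | while read -r a b; do
    echo "status=$a file=$b"    # prints: status=M file=app/models/super hero.rb
done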
I found myself in the same situation and ended up adding these aliases to my .profile file:
alias c1="awk '{print \$1}'"
alias c2="awk '{print \$2}'"
alias c3="awk '{print \$3}'"
alias c4="awk '{print \$4}'"
alias c5="awk '{print \$5}'"
alias c6="awk '{print \$6}'"
alias c7="awk '{print \$7}'"
alias c8="awk '{print \$8}'"
alias c9="awk '{print \$9}'"
Which allows me to write things like this:
svn st | c2 | xargs rm
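The nine definitions can also be generated in a loop rather than written out by hand; a sketch for the same .profile:

# The escaped \$ survives the double quotes, so each alias body
# ends up as awk '{print $1}', awk '{print $2}', and so on.
for i in 1 2 3 4 5 6 7 8 9; do
    alias "c$i"="awk '{print \$$i}'"
done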
Try zsh. It supports global aliases, so you can define X in your .zshrc to be
alias -g X="| cut -d' ' -f2"
then you can do:
cat file X
You can take it one step further and define it for the nth column:
alias -g X2="| cut -d' ' -f2"
alias -g X1="| cut -d' ' -f1"
alias -g X3="| cut -d' ' -f3"
so that cat file X2 (and so on) will output the nth column of the file "file". You can do this for grep output or less output, too. This is very handy and a killer feature of zsh.
You can go one step further and define D to be:
alias -g D="|xargs rm"
Now you can type:
cat file X1 D
to delete all files mentioned in the first column of file "file".
If you know bash, zsh is not much of a change, apart from some new features.
HTH Chris
Because you seem to be unfamiliar with scripts, here is an example.
#!/bin/sh
# usage: svn st | x 2 | xargs rm
col=$1
shift
awk -v col="$col" '{print $col}' "$@"   # reads the named files, or stdin if none
If you save this in ~/bin/x and make sure ~/bin is in your PATH (now that is something you can and should put in your .bashrc), you have the shortest possible command for generally extracting column n: x n.
The script should do proper error checking and bail if invoked with a non-numeric argument or the incorrect number of arguments, etc; but expanding on this bare-bones essential version will be in unit 102.
Maybe you will want to extend the script to allow a different column delimiter. Awk by default parses input into fields on whitespace; to use a different delimiter, use -F ':' where : is the new delimiter. Implementing this as an option to the script makes it slightly longer, so I'm leaving that as an exercise for the reader.
Usage
Given a file file:
1 2 3
4 5 6
You can either pass it via stdin (using a useless cat merely as a placeholder for something more useful):
$ cat file | sh script.sh 2
2
5
Or provide it as an argument to the script:
$ sh script.sh 2 file
2
5
Here, sh script.sh is assuming that the script is saved as script.sh in the current directory; if you save it with a more useful name somewhere in your PATH and mark it executable, as in the instructions above, obviously use the useful name instead (and no sh).
It looks like you already have a solution. To make things easier, why not just put your command in a bash script (with a short name) and just run that instead of typing out that 'long' command every time?
If you are ok with manually selecting the column, you could be very fast using pick:
svn st | pick | xargs rm
Just go to any cell of the 2nd column, press c and then hit enter
Note that the file path does not have to be in the second column of svn st output. For example, if you modify a file and also modify one of its properties, the path will be in the third column.
See possible output examples in:
svn help st
Example output:
 M     wc/bar.c
A  +   wc/qax.c
I suggest stripping the leading status columns (cut -c8- keeps everything from the eighth character onward):
svn st | cut -c8- | while read FILE; do echo whatever with "$FILE"; done
If you want to be 100% sure, and deal with fancy filenames with whitespace at the end for example, you need to parse the XML output:
svn st --xml | grep -o 'path=".*"' | sed 's/^path="//; s/"$//'
Of course you may want to use some real XML parser instead of grep/sed.
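For example, with xmlstarlet (an assumption; any XML-aware tool works here):

# Print the path attribute of every <entry> element, one per line
svn st --xml | xmlstarlet sel -t -m '//entry' -v '@path' -n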

bash grep 'random matching' string

Is there a way to grab a 'random matching' string via bash from a text file?
I am currently grabbing a download link via bash, curl & grep from an online text file.
Example:
DOWNLOADSTRING="$(curl -o - "http://example.com/folder/downloadlinks.txt" | grep "$VARIABLE")"
from an online text file which contains
http://alphaserver.com/files/apple.zip
http://alphaserver.com/files/banana.zip
where $VARIABLE is something the user selected.
Works great, but I wanted to add some mirrors to the text file.
So when the variable 'banana' is selected, the text file which I grep contains:
http://alphaserver.com/files/apple.zip
http://betaserver.com/files/apple.zip
http://gammaserver.com/files/apple.zip
http://deltaserver.com/files/apple.zip
http://alphaserver.com/files/banana.zip
http://betaserver.com/files/banana.zip
http://gammaserver.com/files/banana.zip
http://deltaserver.com/files/banana.zip
The code should pick a random 'banana' string and store it as the DOWNLOADSTRING variable.
The current code above only works with one matching string in the text file, since it grabs everything containing 'banana'.
What this is for: I wanted to add some mirror download links for the files in the online text file, and the current code doesn't allow that.
Can I let grep grab one random 'banana' string (and not all of them)?
See this question for how to get a random line after grep; rl seems like a good candidate:
What's an easy way to read random line from a file in Unix command line?
Then do a grep ... | rl | head -n 1.
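Put together with the download command from the question, that pipeline would look something like this (a sketch; rl comes from the randomize-lines package):

# Grab all matching mirror links, shuffle them, keep one at random
DOWNLOADSTRING="$(curl -o - "http://example.com/folder/downloadlinks.txt" |
    grep "$VARIABLE" | rl | head -n 1)"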
Try this:
DOWNLOADSTRING="$(curl -o - "http://example.com/folder/downloadlinks.txt" | grep "$VARIABLE" |
    sort -R | head -1)"
The output will be random-sorted and then the first line will be selected.
If mirrors.txt has the following data, which you provided in your question:
http://alphaserver.com/files/apple.zip
http://betaserver.com/files/apple.zip
http://gammaserver.com/files/apple.zip
http://deltaserver.com/files/apple.zip
http://alphaserver.com/files/banana.zip
http://betaserver.com/files/banana.zip
http://gammaserver.com/files/banana.zip
http://deltaserver.com/files/banana.zip
Then you can use the following command to get a random "matched string" from the file:
grep -E "${VARIABLE}" mirrors.txt | shuf -n1
Then you can store it as the variable DOWNLOADSTRING by setting its value with a function call, like so:
rand_mirror_call() { grep -E "${1}" mirrors.txt | shuf -n1; }
DOWNLOADSTRING="$(rand_mirror_call ${VARIABLE})"
This will give you a dedicated random line from the text file based on the user's ${VARIABLE} input. It is a lot less typing this way.
