Yanking text from the previous stdout onto the command line - bash

I'd like to set up my Bash in such a way that I could yank text from the previous command's stdout. The example use case I'll use is resolving conflicts during a git rebase.
$ git status
# Not currently on any branch.
# Unmerged paths:
# (use "git reset HEAD <file>..." to unstage)
# (use "git add/rm <file>..." as appropriate to mark resolution)
#
# both modified: app/views/report/index.html.erb
#
$ vim app/views/report/index.html.erb
# .... edit, resolve conflicts ....
$ git add <Alt+.>
The problem is that the easiest way to grab the filename for the 2nd command (vim ...) is to move my hand over to the mouse. One option is screen, but that has its own set of issues as a day-to-day shell (not the least of which is that I use and abuse Ctrl+A as a readline shortcut).
Where could I start at making this work for me? Ideally I'd like to be able to pull the Nth line from the stdout of the previous command somewhere that I can manipulate it as a command.

Other than using the mouse, the only way I can think of is to use grep, sed and/or awk, perhaps with tee, a Bash function, and process and/or command substitution:
vim $(git status | tee /dev/tty | grep ...)
or
var=$(git status | tee /dev/tty | grep ...)
vim "$var"
git add "$var"
The tee allows you to see the full output while capturing the filtered output. Creating a function would allow you to pass an argument that selects a certain line:
var=$(some_func 14)
etc.
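A minimal sketch of such a function (pick_line is a hypothetical name; it wraps the whole command, shows the full output on the terminal via tee, and captures only the requested line):
pick_line() {
    # $1 is the line number to capture; the rest is the command to run
    local n=$1; shift
    "$@" | tee /dev/tty | sed -n "${n}p"
}
var=$(pick_line 5 git status)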
The disadvantage is that you have to do this from the start. I don't know of any way to do this after the fact without using screen or some other output logging and scripting a rummage through the log.

I don't know of a good, clean solution, but as a hack you could try the script command, which logs all input and output to a file. For GNU script:
$ script -f
Script started, file is typescript
$ ls -1
bar
baz
foo
typescript
$ echo $(tail -3 typescript | head -1)
foo
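The tail -3 | head -1 dance is needed because, by the time tail runs, the typescript file also contains the command line you just typed, so the output line you want sits a few lines from the end. A small helper along those lines (a sketch; from_log is a hypothetical name):
from_log() {
    # print the Nth line counting from the end of the typescript log
    tail -n "$1" typescript | head -n 1
}
Usage: vim "$(from_log 3)"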

Pipe the output through sed:
git status | sed -n '5p'
to get the 5th line.
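Combined with command substitution, this can feed the next command directly. A sketch (taking the last field with awk is an assumption about the exact git status layout):
git add "$(git status | sed -n '5p' | awk '{print $NF}')"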

Related

Shell script not working while writing the same on the Terminal works [duplicate]

I have a simple one-liner that works perfectly in the terminal:
history | sort -k2 | uniq -c --skip-fields=1 | sort -r -g | head
What it does: it prints the 10 commands the user has run most frequently recently. (Don't ask me why I would want to achieve such a thing.)
I fire up an editor and type the same thing with #!/bin/bash at the beginning:
#!/bin/bash
history | sort -k2 | uniq -c --skip-fields=1 | sort -r -g | head
And say I save it as script.sh. Then when I go to the same terminal, type bash script.sh and hit Enter, nothing happens.
What I have tried so far: Googling. Many people have similar pains but they got resolved by a sudo su or adding/removing spaces. None of this worked for me. Any idea where I might be going wrong?
Edit:
I would want to do this from the terminal itself. The system on which this script would run may or may not provide permissions to change files in the home folder.
Another question, as suggested by BryceAtNetwork23: what is so special about the history command that prevents us from executing it?
Looking at your history only makes sense in an interactive shell. Make that command a function instead of a standalone script. In your ~/.bashrc, put
popular_history() {
    history | sort -k2 | uniq -c --skip-fields=1 | sort -r -g | head
}
To use history from a non-interactive shell, you need to enable it; it is only on by default for interactive shells. You can add the following line to the shell script:
set -o history
It still appears that only interactive shells will read the default history file by, well, default, so you'll need to populate the history list explicitly with the next line:
history -r ~/.bash_history
(Read the bash man page for more information on using a file other than the default .bash_history.)
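Putting the pieces together, the script could look like this (a sketch that simply combines the lines above):
#!/bin/bash
set -o history                 # enable the history mechanism
history -r ~/.bash_history     # read the saved history into the history list
history | sort -k2 | uniq -c --skip-fields=1 | sort -r -g | head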
The history mechanism is disabled by default in bash scripts, which is why even the history command won't work inside a .sh file. You need to point the script at the history file explicitly.
The history mechanism can also be enabled by naming the history file and adjusting the run-time parameters, as shown below:
#!/bin/bash
HISTFILE=~/.bash_history
set -o history
Note: put the two lines above at the top of the script file. The history command will then work inside the script.

why git log output redirect to a while loop not working?

I am trying this command in my bash shell script:
git log --oneline --no-decorate --pretty=format:"%s" $oldrev..$newrev
git log --oneline --no-decorate --pretty=format:"%s" $oldrev..$newrev | while read -r line; do
    echo "$line"
done
The first git log prints output, but the second one, piped into the while loop, prints nothing. Why?
I invoke my script like this (the second and third arguments are passed to $oldrev and $newrev):
./check master a735c2f eb23992
If I add the --no-pager option, neither one prints anything.
I am using bash 4.4.23(1)-release on Fedora 28.
Instead of pretty=format, you should use pretty=tformat:
'tformat:'
The 'tformat:' format works exactly like 'format:', except that it provides "terminator" semantics instead of "separator" semantics.
In other words, each commit has the message terminator character (usually a newline) appended, rather than a separator placed between entries.
This means that the final entry of a single-line format will be properly terminated with a new line, just as the "oneline" format does. For example:
$ git log -2 --pretty=format:%h 4da45bef \
| perl -pe '$_ .= " -- NO NEWLINE\n" unless /\n/'
4da45be
7134973 -- NO NEWLINE
$ git log -2 --pretty=tformat:%h 4da45bef \
| perl -pe '$_ .= " -- NO NEWLINE\n" unless /\n/'
4da45be
7134973
In addition, any unrecognized string that has a % in it is interpreted as if it has tformat: in front of it.
For example, these two are equivalent:
$ git log -2 --pretty=tformat:%h 4da45bef
$ git log -2 --pretty=%h 4da45bef
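Applied to the loop from the question, only the format specifier needs to change (a sketch):
git log --oneline --no-decorate --pretty=tformat:"%s" $oldrev..$newrev | while read -r line; do
    echo "$line"
done
With terminator semantics every subject line, including the last one, ends in a newline, so the final entry is no longer dropped by read.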

Paste the last output and edit it in bash

I like to use bash (on Linux) without touching the mouse.
I often encounter the following situation.
$ locate libfreetype.a
/usr/lib/x86_64-linux-gnu/libfreetype.a
$ cd /usr/lib/x86_64-linux-gnu
In this case, I copy /usr/lib/x86_64-linux-gnu/ and paste it using the mouse, or type it out. I do not want to do that.
Ideally, the output of locate libfreetype.a would be stored somewhere (maybe in the kill ring?) so that I could paste it with C-y and edit it on the command line.
Is there a good way to do this?
(Just for this example case, there are smart one-line commands. But those are not the desired answers. I want a general solution.)
Another example
Suppose that I remember that there is a memo... in the same directory as libfreetype.a but I forgot the directory name.
$ locate libfreetype.a
/usr/lib/x86_64-linux-gnu/libfreetype.a
$ nano /usr/lib/x86_64-linux-gnu/memo # Tab completion here
$ nano /usr/lib/x86_64-linux-gnu/memo_xxx.txt
If I could cache the output /usr/lib/x86_64-linux-gnu/libfreetype.a and paste it, things would be very easy.
(nano $(dirname $(locate libfreetype.a))/memo_xxx.txt works for this case, but if I want to change the path itself, I need another technique.)
As noted in the comments, there is probably no universal way to do this in a terminal. But it is possible to redirect the output of the command to a program that copies stdin to the clipboard, e.g. xclip. If you want to insert and edit the copied text on the command line, you need to remove newline characters before copying. Consider the following script:
copy.bash
#!/bin/bash
tr '\n' ' ' | xclip
Usage:
$ locate libfreetype.a | copy
$ cd # now press <shift> + <insert>
$ cd /usr/lib/x86_64-linux-gnu/libfreetype.a # continue editing
The xclip command copies its input for pasting into X applications.
The tr '\n' ' ' command translates all newlines into spaces. You need this if you want to paste the text into the command line: it strips the trailing newline and joins the lines if the output contains more than one. With plain xclip, any newline characters are pasted literally, which causes bash to run the command immediately after pasting, without letting you edit it.
If the output of the command (e.g. locate) is multi-line and you want to copy only one of the lines (instead of all of them), you can use iselect. iselect reads its input, shows an interactive menu for selecting a line or lines, and prints the selection to standard output.
Use it like this:
$ locate pattern | iselect -a | tr '\n' ' ' | xclip
# locate prints several lines
# iselect allows user to select one line interactively
# the result is copied to clipboard
$ # <shift> + <insert>
This can also be a script:
icopy.bash
#!/bin/bash
iselect -am | tr '\n' ' ' | xclip
(the -m option allows selecting several lines instead of one)
Usage:
$ locate pattern | icopy
Disadvantages of these approaches:
it works only in X sessions, since xclip needs a running X session
you need to install new software (xclip and, optionally, iselect)
you need to redirect the output explicitly when running the command; so, technically, it cannot be considered a real answer. But it is the best solution I have found for myself.
BTW, here is the script on my local machine that I really use quite often:
$ cat ~/bin/copy
#!/bin/bash
paste -sd' ' | tr -d '\n' | xsel --clipboard
echo "Copied: $(xsel --clipboard --output)" >&2
$ echo hello | copy
Copied: hello
Links: man iselect, man xclip, man tr, yank.
You can run script (man script) from your .bashrc to generate a live log of your session's output, and bind a shortcut that opens the log file in an editor so you can insert yanked text back into $READLINE_LINE.
The catch is that script captures raw output from interactive programs (such as editors), so it would have to be modified to skip interactive output for this to work cleanly. The next step would be parsing the output to make navigation faster.
Here is a .bashrc snippet that does this for non-interactive tools only: https://asciinema.org/a/395092
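As a rough illustration of the $READLINE_LINE part, a minimal sketch (assuming the session is being logged to ~/.session.log; the helper name and the Ctrl+G binding are arbitrary choices):
insert_from_log() {
    # append the last logged line to whatever is already typed
    READLINE_LINE="${READLINE_LINE}$(tail -n 1 ~/.session.log)"
    READLINE_POINT=${#READLINE_LINE}
}
bind -x '"\C-g": insert_from_log'
In practice the log line would need filtering first, since raw script output contains prompts and escape sequences.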
I noticed that a solution to this problem is provided by the terminal emulator kitty. We can use a feature called "hints" via keyboard shortcuts that are configured by default.
As in the original question, let's consider the same situation.
$ locate libfreetype.a
/usr/lib/x86_64-linux-gnu/libfreetype.a
$ # you want to input /usr/lib/x86_64-linux-gnu here
If you are using kitty, you can type ctrl+shift+p and then l.
You will enter a mode for selecting a line from the screen. When you select the previous line, it is pasted into the current terminal input.
The details can be found in the official documentation.
The configuration associated with the action is written like this.
map ctrl+shift+p>l kitten hints --type line --program -
This means that kitten hints --type line --program - is the command bound to ctrl+shift+p followed by l.
You could use
!$
(which expands to the last argument of the previous command), like this:
shell$ echo myDir/
myDir/
shell$ cd !$
cd myDir/
shell$ pwd
/home/myDir

Remove escaping sequences automatically while redirecting

Lots of shell tools such as grep and ls can print colorful text in the terminal. When their output is redirected to a regular file, the escape sequences representing colors are removed and only plain text is written to the file. How is that achieved?
Use:
if [ -t 1 ]
to test whether stdout is connected to a terminal. If it is, print the escape sequences; otherwise print plain text.
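A minimal sketch of that pattern (the color codes are the standard ANSI escape sequences):
if [ -t 1 ]; then
    red=$'\e[31m' reset=$'\e[0m'    # stdout is a terminal: use color
else
    red='' reset=''                 # stdout is a file or pipe: plain text
fi
printf '%shello%s\n' "$red" "$reset"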
Specifically, grep has a command-line switch to adjust this setting:
echo hello | grep ll # "ll" is printed in red
echo hello | grep --color=never ll # "ll" is printed without special colouring
Most if not all tools that do this sort of thing will have a similar switch - check the manpages for other tools.
Another way to do this, for tools that auto-detect whether stdout is connected to a terminal, is to trick them by piping the output through cat:
echo hello | grep ll | cat # "ll" is printed without special colouring
I had the same issue the other day and realized I had the following in my .bashrc:
alias grep='grep --color=always'
I changed it to the following and had no further problems:
alias grep='grep --color=auto'

how to make a winmerge equivalent in linux

My friend recently asked how to compare two folders in Linux and then run meld against any text files that differ. I'm slowly catching on to the Linux philosophy of piping many granular utilities together, and I put together the following solution. My question is: how could I improve this script? There seems to be quite a bit of redundancy, and I'd appreciate learning better ways to write Unix scripts.
#!/bin/bash
dir1=$1
dir2=$2
# show files that are different only
cmd="diff -rq $dir1 $dir2"
eval $cmd # print this out to the user too
filenames_str=`$cmd`
# remove lines that represent only one file, keep lines that have
# files in both dirs, but are just different
tmp1=`echo "$filenames_str" | sed -n '/ differ$/p'`
# grab just the first filename for the lines of output
tmp2=`echo "$tmp1" | awk '{ print $2 }'`
# convert newline separators to spaces
fs=$(echo "$tmp2")
# convert string to array
fa=($fs)
for file in "${fa[@]}"
do
    # drop first directory in path to get relative filename
    rel=`echo $file | sed "s#${dir1}/##"`
    # determine the type of file
    file_type=`file -i $file | awk '{print $2}' | awk -F"/" '{print $1}'`
    # if it's a text file send it to meld
    if [ $file_type == "text" ]
    then
        # throw out error messages with &> /dev/null
        meld $dir1/$rel $dir2/$rel &> /dev/null
    fi
done
Please preserve/promote readability in your answers. An answer that is shorter but harder to understand won't qualify as an answer.
It's an old question, but let's work on it a bit just for fun, without thinking about the final goal (maybe an SCM) or about tools that already do this better. Let's just focus on the script itself.
In the OP's script there is a lot of string processing inside bash, using tools like sed and awk, sometimes more than once in the same command line or inside a loop executing n times (once per file).
That's OK, but it's necessary to remember that:
Each time the script calls one of those programs, a new process is created in the OS, and that is expensive in time and resources. So the fewer programs called, the better the script performs:
diff 2 times (1 just to print to user)
sed 1 time processing diff result and 1 time for each file
awk 1 time processing sed result and 2 times for each file (processing file result)
file 1 time for each file
That doesn't apply to echo, read, test and the other bash builtins, since no external program is executed for them.
meld is the final command that will display the files to the user, so it doesn't count.
Even with builtin commands, pipelines (|) have a cost too, because the shell has to create pipes, duplicate handles, and maybe even fork copies of itself (each fork being a process too). So again: less is better.
The messages of the diff command are locale dependent, so if the system is not in English, the whole script won't work.
With that in mind, let's clean up the original script a bit while keeping the OP's logic:
#!/bin/bash
dir1=$1
dir2=$2
# Set English as the current language
LANG=en_US.UTF-8
# (1) show files that are different only
diff -rq $dir1 $dir2 |
# (2) remove lines that represent only one file, keep lines that have
# files in both dirs, but are just different; delete all but the left filename
sed '/ differ$/!d; s/^Files //; s/ and .*//' |
# (3) determine the type of file
file -i -f - |
# (4) for each file
while IFS=":" read file file_type
do
    # (5) drop first directory in path to get relative filename
    rel=${file#$dir1}
    # (6) if it's a text file send it to meld
    if [[ "$file_type" =~ "text/" ]]
    then
        # throw out error messages with &> /dev/null
        meld ${dir1}${rel} ${dir2}${rel} &> /dev/null
    fi
done
A little explanation:
A single chain of commands cmd1 | cmd2 | ..., where the output (stdout) of each one is the input (stdin) of the next.
Run sed just once to perform 3 operations (separated with ;) on the diff output:
Keeping only lines ending with " differ"
Deleting "Files " at the beginning of the remaining lines
Deleting from " and " to the end of the remaining lines
Run the file command once to process the file list on stdin (option -f -).
Use the bash while statement to read two values separated by : from each line of stdin.
Use bash variable substitution to strip the directory prefix and get the relative filename.
Use the bash [[ test to match the file type against a regular expression.
For clarity, I didn't account for file and directory names that contain spaces. In such cases both scripts will fail. To avoid that, it is necessary to enclose every reference to a file/dir name variable in double quotes.
I didn't use awk because it is powerful enough to replace almost the entire script ;-)
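For the curious, a rough sketch of what an awk-centric version could look like (an assumption-laden variant, not a drop-in replacement, and it still depends on diff's English message format):
#!/bin/bash
LANG=en_US.UTF-8
diff -rq "$1" "$2" |
# awk does the filtering and extraction that sed did above
awk '/ differ$/ { sub(/^Files /, ""); sub(/ and .*/, ""); print }' |
file -i -f - |
while IFS=":" read -r file file_type
do
    if [[ "$file_type" =~ "text/" ]]
    then
        meld "$file" "$2${file#$1}" &> /dev/null
    fi
done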
