How do I figure out why my shell crashes? - bash

When I enter this command:
$ grep -n 'some search' $file | awk '{print 1}' | sed 's/://' | xargs -I{} vim +"{}" $file
It will open, but after quitting vim, the shell crashes. It does not react to any input, not even Ctrl-C. I have no idea why; how can I find out? I suspect there is some infinite loop, because after a reboot there is a lot of clearing in the terminal. But I really have no clue about the reason.
PS:
alias grep: alias grep='grep --color=auto -P'
alias sed: alias sed='sed -E'
No more aliases.

vi changes the terminal settings.
When you only want to go to the first match, you can use the line number with
vi +$(grep -n 'some search' .bashrc | cut -d: -f1 | head -1) .bashrc
This is still too complicated; you can jump to the first match directly with
vi '+/some search' "$file"
When you want to go to the second match, just press n.
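If the terminal is left unresponsive after such an experiment (vim was started with a pipe as its stdin and changed the terminal settings), a minimal recovery sketch, assuming a POSIX-ish terminal, is:
# Type these blind if the terminal no longer echoes input; end each with Enter (or Ctrl-J)
stty sane    # restore a sane line discipline (echo, canonical mode, ...)
reset        # or: tput reset - reinitialize the terminal
This is usually less drastic than rebooting or killing the terminal emulator.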

Related

Using the output of one command as the argument for another

I am attempting to grep a file and pipe the line number out to
vim +{lineNumber} filetoedit
unfortunately Vim throws an error saying
Vim: Warning: Input is not from a terminal
An example:
grep -nF 'Im looking for this' testfile.txt | cut -f1 -d: | xargs vim +{} testfile.txt
The command run by xargs inherits stdin from xargs, so its input is connected to the pipe from cut, not the terminal.
Assign the result to a variable and use that.
line=$(grep -nF 'Im looking for this' testfile.txt | cut -f1 -d: )
vim "+$line" testfile.txt

Using Bash Less and Grep together [duplicate]

Is it possible to use grep on a continuous stream?
What I mean is sort of a tail -f <file> command, but with grep on the output in order to keep only the lines that interest me.
I've tried tail -f <file> | grep pattern but it seems that grep can only be executed once tail finishes, that is to say never.
Turn on grep's line buffering mode when using BSD grep (FreeBSD, Mac OS X etc.)
tail -f file | grep --line-buffered my_pattern
It looks like a while ago --line-buffered didn't matter for GNU grep (used on pretty much any Linux) as it flushed by default (YMMV for other Unix-likes such as SmartOS, AIX or QNX). However, as of November 2020, --line-buffered is needed (at least with GNU grep 3.5 in openSUSE, but it seems generally needed based on comments below).
I use the tail -f <file> | grep <pattern> all the time.
It will wait till grep flushes, not till it finishes (I'm using Ubuntu).
I think that your problem is that grep uses some output buffering. Try
tail -f file | stdbuf -o0 grep my_pattern
It will set the output buffering mode of grep to unbuffered.
If you want to find matches in the entire file (not just the tail), and you want it to sit and wait for any new matches, this works nicely:
tail -c +0 -f <file> | grep --line-buffered <pattern>
The -c +0 flag says that the output should start 0 bytes (-c) from the beginning (+) of the file.
In most cases, you can tail -f /var/log/some.log |grep foo and it will work just fine.
If you need to use multiple greps on a running log file and you find that you get no output, you may need to stick the --line-buffered switch into your middle grep(s), like so:
tail -f /var/log/some.log | grep --line-buffered foo | grep bar
You may consider this answer an enhancement. Usually I use
tail -F <fileName> | grep --line-buffered <pattern> -A 3 -B 5
-F is better in case of file rotation (-f will not work properly if the file is rotated).
-A and -B are useful to get the lines just before and after the pattern occurrence; these blocks will appear between dashed-line separators.
But for me, I prefer doing the following:
tail -F <file> | less
This is very useful if you want to search inside streamed logs, i.e. go back and forward and look deeply.
Didn't see anyone offer my usual go-to for this:
less +F <file>
ctrl + c
/<search term>
<enter>
shift + f
I prefer this, because you can use ctrl + c to stop and navigate through the file whenever, and then just hit shift + f to return to the live, streaming search.
sed would be a better choice (stream editor)
tail -n0 -f <file> | sed -n '/search string/p'
and then if you wanted the tail command to exit once you found a particular string:
tail --pid=$(($BASHPID+1)) -n0 -f <file> | sed -n '/search string/{p; q}'
Obviously a bashism: $BASHPID will be the process id of the tail command. The sed command is next after tail in the pipe, so the sed process id will be $BASHPID+1.
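A hedged alternative that avoids the PID arithmetic (assuming a grep with the -m option, such as GNU or BSD grep): let grep quit after the first match, so that tail dies with SIGPIPE the next time it writes into the broken pipe:
tail -n0 -f <file> | grep --line-buffered -m1 'search string'
Note that tail only notices the broken pipe when the file grows again, so it may linger for a while after the match.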
Yes, this will actually work just fine. Grep and most Unix commands operate on streams one line at a time. Each line that comes out of tail will be analyzed and passed on if it matches.
This one command works for me (SUSE):
mail-srv:/var/log # tail -f /var/log/mail.info |grep --line-buffered LOGIN >> logins_to_mail
collecting logins to mail service
Coming somewhat late to this question, and considering this kind of work an important part of the monitoring job, here is my (not so short) answer...
Following logs using bash
1. Command tail
This command is a little more powerful than the already published answers suggest.
Difference between the follow options tail -f and tail -F, from the man page:
-f, --follow[={name|descriptor}]
output appended data as the file grows;
...
-F same as --follow=name --retry
...
--retry
keep trying to open a file if it is inaccessible
This means: by using -F instead of -f, tail will re-open the file(s) when they are removed (on log rotation, for example).
This is useful for watching a logfile over many days.
Ability to follow more than one file simultaneously
I've already used:
tail -F /var/www/clients/client*/web*/log/{error,access}.log /var/log/{mail,auth}.log \
/var/log/apache2/{,ssl_,other_vhosts_}access.log \
/var/log/pure-ftpd/transfer.log
For following events through hundreds of files... (consider the rest of this answer to understand how to make it readable... ;)
Using the -n switch (don't use -c, which counts bytes, not lines!). By default tail will show the last 10 lines. This can be tuned:
tail -n 0 -F file
Will follow the file, but only new lines will be printed.
tail -n +0 -F file
Will print the whole file before following its progression.
2. Buffer issues when piping:
If you plan to filter the output, consider buffering! See the -u option for sed, --line-buffered for grep, or the stdbuf command:
tail -F /some/files | sed -une '/Regular Expression/p'
Is (a lot more efficient than using grep and) a lot more reactive than if you don't use the -u switch in the sed command.
tail -F /some/files |
sed -une '/Regular Expression/p' |
stdbuf -i0 -o0 tee /some/resultfile
3. Recent journaling system
On recent systems, instead of tail -f /var/log/syslog you have to run journalctl -xf, in nearly the same way...
journalctl -axf | sed -une '/Regular Expression/p'
But read the man page; this tool was built for log analysis!
4. Integrating this in a bash script
Colored output of two files (or more)
Here is a sample script watching many files, coloring the output of the 1st file differently from the others:
#!/bin/bash
tail -F "$#" |
sed -une "
/^==> /{h;};
//!{
G;
s/^\\(.*\\)\\n==>.*${1//\//\\\/}.*<==/\\o33[47m\\1\\o33[0m/;
s/^\\(.*\\)\\n==> .* <==/\\o33[47;31m\\1\\o33[0m/;
p;}"
It works fine on my host, running:
sudo ./myColoredTail /var/log/{kern.,sys}log
Interactive script
You may be watching logs in order to react to events?
Here is a little script that plays a sound when a USB device appears or disappears, but the same script could send mail, or do any other interaction, like powering on the coffee machine...
#!/bin/bash
# Open a dedicated file descriptor reading from a background `tail -F`
exec {tailF}< <(tail -F /var/log/kern.log)
tailPid=$!                                   # PID of the process substitution
while :; do
    # Poll the keyboard for up to 0.3 s; a (case-insensitive) "q" quits
    read -rsn 1 -t .3 keyboard
    [ "${keyboard,}" = "q" ] && break
    # -t 0: is a line already waiting on the tail descriptor?
    if read -ru $tailF -t 0 _; then
        read -ru $tailF line
        case $line in
            *New\ USB\ device\ found* ) play /some/sound.ogg      ;;
            *USB\ disconnect* )         play /some/othersound.ogg ;;
        esac
        printf "\r%s\e[K" "$line"            # show the last line, clear to end of line
    fi
done
echo
exec {tailF}<&-                              # close the descriptor
kill $tailPid                                # stop the background tail
You can quit by pressing the Q key.
You certainly won't succeed with
tail -f /var/log/foo.log | grep --line-buffered string2search
when you use "colortail" as an alias for tail, e.g. in bash:
alias tail='colortail -n 30'
You can check with
type tail
If this outputs something like
tail is aliased to `colortail -n 30'
then you have your culprit :)
Solution:
remove the alias with
unalias tail
Ensure that you're using the 'real' tail binary with this command:
type tail
which should output something like:
tail is /usr/bin/tail
and then you can run your command
tail -f foo.log |grep --line-buffered something
Good luck.
Use awk (another great utility) instead of grep where you don't have the line-buffered option! It will continuously stream your data from tail.
This is how you use grep:
tail -f <file> | grep pattern
This is how you would use awk:
tail -f <file> | awk '/pattern/{print $0}'
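Note that awk's own output is also block-buffered once it writes into a pipe, so if you pipe the awk output further you may want to flush after every match. A minimal sketch, assuming an awk that provides fflush() (gawk and mawk both do); matches.log is just an illustrative file name:
tail -f <file> | awk '/pattern/ { print; fflush() }' | tee matches.log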

Curl and xargs in piped commands

I want to process an old database where passwords are plain text (comma separated; passwd is the 5th field in the CSV file to which the database has been exported) to encrypt them for further use by dokuwiki. Here is my bash command (grep and sed are there to extract the encrypted passwd from the curl output):
cat users.csv | awk 'FS="," { print $4 }' | xargs -l bash -c 'curl -s --data-binary "pass1=$0&pass2=$0" "https://sprhost.com/tools/SMD5.php" -o - ' | xargs | grep -o '<tt.*tt>' | sed -e 's/tt//g' | sed -e 's/<[^>]*>//g'
I get the following complaint from xargs:
xargs: unmatched single quote; by default quotes are special to xargs unless you use the -0 option
Only the first line of the file is processed, and nothing happens after that.
Using the -0 option and playing around with quotes doesn't solve anything. Where am I wrong in the command line? Maybe a more advanced language would be more adequate for this.
Thanks for the help, LM
In general, if you have such a long pipe of commands, it is better to split them if things go wrong. Going through your pipe:
cat users.csv |
Nothing unexpected there.
awk 'FS="," { print $4 }' |
You probably wanted to do awk 'BEGIN {FS=","} { print $4 }'. Try the first two commands in the pipe and see if they produce the correct answer.
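For illustration (this is general awk behavior, not specific to this file): an FS assignment used as a pattern only takes effect from the next record onward, because each line is split into fields before its pattern is evaluated, so the first line of users.csv is still split on whitespace. Using -F or a BEGIN block applies the separator to the first line too:
awk -F, '{ print $4 }' users.csv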
xargs -l bash -c 'curl -s --data-binary "pass1=$0&pass2=$0" "https://sprhost.com/tools/SMD5.php" -o - ' |
Nothing wrong there, although there might be better ways to do an MD5 hash.
xargs |
What is this xargs doing in the pipe? It should be removed.
grep -o '<tt.*tt>' |
Note that this will produce two lines:
<tt>$1$17ab075e$0VQMuM3cr5CtElvMxrPcE0</tt>
<tt><your_docuwiki_root>/conf/users.auth.php</tt>
which is probably not what you expected.
sed -e 's/tt//g' |
sed -e 's/<[^>]*>//g'
which will remove the html-tags, though
sed 's/<tt>//;s/<.tt>//'
will do the same.
So I'd say a wrong awk and an xargs too many.
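Putting those two fixes together, a sketch of the corrected pipeline (keeping the asker's field number and URL as given, and assuming the page really returns the hash inside <tt> tags) might look like:
awk -F, '{ print $4 }' users.csv |
  xargs -l bash -c 'curl -s --data-binary "pass1=$0&pass2=$0" "https://sprhost.com/tools/SMD5.php" -o -' |
  grep -o '<tt>[^<]*</tt>' |
  sed 's/<[^>]*>//g'
You would still need to drop the second <tt> line (the users.auth.php path) mentioned above, for example with a final grep '^\$1\$' if the hashes keep that prefix.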

How do I pipe the last command in my command history to clipboard?

I'm a total noob when it comes to grep/awk/sed/cut, so I need help with this. I've got this:
history | tail -n 1 | pbcopy
which returns
1968* mv ~/iPhoto\ Library.zip ./ ; bell
which is great because that's the last command I ran, but I need to remove the numbers at the beginning. I've tried various iterations of awk, grep, sed and cut, but like I said I'm a noob when it comes to those kinds of commands. How would I do that?
You could try this sed command (note that sed has to come before pbcopy: pbcopy consumes its stdin and writes nothing to stdout, so a filter placed after it would see no input):
history | tail -n 1 | sed 's/^ *[0-9]*\** *//' | pbcopy
Through awk:
history | tail -n 1 | awk '{sub(/^ *[0-9]+\*? */,"")}1' | pbcopy
Output:
mv ~/iPhoto\ Library.zip ./ ; bell
Just pipe your output to
awk '{for(i=2;i<NF;i++)printf "%s",$i OFS; if (NF) printf "%s",$NF; printf ORS}'
Output:
mv ~/iPhoto\ Library.zip ./ ; bell
In zsh you can just tell history (which is a synonym to fc -l) to not print the numbers with -n. Also, you can get it to print only the last entry with -1:
history -n -1 | pbcopy
fc -l -n -1 | pbcopy
In bash, history has no options for this, but there is also the fc command, which even supports the needed options. Unfortunately, 'suppress command numbers' (from man 1 bash) seems to mean 'print a TAB instead', so the output starts with a TAB and a space, which can be removed with sed:
fc -l -n -1 | sed 's/^\t //' | pbcopy

Getting a zsh alias including a pipe to execute

I wanted a command that would quickly copy the current tmux window layout to the clipboard on Mac using zsh. I came up with the following:
tmux list-windows | awk '{print $7}' | sed 's/\]$//' | pbcopy
When I run this from the command line it works perfectly with an output like the following:
d97b,135x32,0,0[135x16,0,0{87x16,0,0,0,47x16,88,0,1},135x15,0,17{87x15,0,17,2,47x15,88,17,3}]
However, I can't seem to run it as an alias. If I add the line:
alias layout="tmux list-windows | awk '{print $7}' | sed 's/\]$//' | pbcopy"
to my .zshrc file when I run layout the command does not work as expected. It instead outputs the full tmux list-windows command with the word layout replacing the session name:
0: layout* (4 panes) [135x32] [layout d97b,135x32,0,0[135x16,0,0{87x16,0,0,0,47x16,88,0,1},135x15,0,17{87x15,0,17,2,47x15,88,17,3}]] #0 (active)
What am I doing wrong?
Thanks.
alex_i is correct: if you escape the $7, everything works.
alias layout="tmux list-windows | awk '{print \$7}' | sed 's/\]$//' | pbcopy"
Note the backslash before the $7.
Don't use an alias; use a function:
layout () {
tmux list-windows | awk '{print $7}' | sed 's/\]$//' | pbcopy
}
Then you don't need to worry about quoting.
Is your '$7' interpreted when .zshrc is loaded? Couldn't that be the issue?
