Find subsequent commands in history - bash

To get a list of all previous ssh commands I can type:
$ history | grep ssh
1234 ssh x@y.z
1235 ssh y@z.a
1236 ssh z@a.b
…
But I am searching for a way to get a list of all ssh commands that are followed by an rsync command. So the result should look like this:
1234 ssh x@y.z
1235 rsync y@z.de …
…
4321 ssh y@z.a
4322 rsync z@a.b …
So I am basically trying to find subsequent words in subsequent lines…

One option is to use:
history | grep -A 1 ssh | grep -B 1 rsync
which is not optimal because it will also match cases in which you ran rsync and ssh in the same line.
A better attempt is:
history | cut -c 8- | grep -A 1 ^ssh | grep -B 1 ^rsync
Here I am using the history command, as you were doing (another alternative would have been to read the history file directly).
Then I remove the line numbers with cut. (This is not very elegant, because it assumes the line-number column in the history output is always 8 characters wide; you might have to check whether 8 is the right number. For the versions I can check, it is.)
Then I grep for lines that start (^) with ssh and ask to print each matching line plus the line after it (-A 1).
Finally I grep for lines that start (^) with rsync and print each matching line plus the line before it (-B 1).
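If you want to avoid the double grep, an awk one-liner can remember the previous line and print the pair only when an ssh entry is immediately followed by an rsync entry. A minimal sketch against the plain history output shown above (the leading number column is absorbed by the regex):
history | awk '
  # print the pair when an ssh entry is directly followed by an rsync entry
  prev ~ /^[ 0-9]+ssh / && $0 ~ /^[ 0-9]+rsync / { print prev; print }
  { prev = $0 }
'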

Related

Search all occurrences of instance IDs in a variable

I have a bash variable which has the following content:
SSH exit status 255 for i-12hfhf578568tn
i-12hdfghf578568tn is able to connect
i-13456tg is not able to connect
SSH exit status 255 for 1.2.3.4
I want to search for strings starting with i- and then extract only the instance ID. So, for the above input, I want output like this:
i-12hfhf578568tn
i-12hdfghf578568tn
i-13456tg
I am open to using grep, awk, or sed.
I am trying to achieve this with the following command, but it gives me the whole line:
grep -oE 'i-.*'<<<$variable
Any help?
You can just change your grep command to:
grep -oP 'i-[^\s]*' <<<$variable
Tested on your input:
$ cat test
SSH exit status 255 for i-12hfhf578568tn
i-12hdfghf578568tn is able to connect
i-13456tg is not able to connect
SSH exit status 255 for 1.2.3.4
$ var=`cat test`
$ grep -oP 'i-[^\s]*' <<<$var
i-12hfhf578568tn
i-12hdfghf578568tn
i-13456tg
grep is exactly what you need for this task; sed would be more suitable if you had to reformat the input, and awk would be a good fit if you needed to reformat a string or do some computation on fields in the rows and columns.
Explanation:
-P enables Perl-compatible regular expressions (PCRE).
i-[^\s]* is a regex that matches a literal i- followed by 0 to N non-space characters; you could change the * to a + if you want to require at least one character after the -, or use the {min,max} syntax to impose a range.
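For example, on the same $var (the {5,20} bounds below are just illustrative values):
$ grep -oP 'i-[^\s]+' <<<"$var"        # at least one non-space character after i-
$ grep -oP 'i-[^\s]{5,20}' <<<"$var"   # between 5 and 20 characters after i-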
Let me know if there is something unclear.
Bonus:
Following Sundeep's comment, you can use one of these improved versions of the regex (the first one uses PCRE, the second one a POSIX character class):
grep -oP 'i-\S*' <<<$var
or
grep -o 'i-[^[:blank:]]*' <<<$var
You could also use the following (I tested it with GNU awk):
echo "$var" | awk -v RS='[ |\n]' '/^i-/'
You can also use this code (tested on Unix):
echo $test | grep -o "i-[0-z]*"
Here,
-o # Prints only the matching part of the lines
i-[0-z]* # This regular expression matches 'i-' followed by any run of characters in the ASCII range 0 to z, which covers the digits and letters in the instance IDs above.

Bash pipes and Shell expansions

I've changed my data source in a bash pipe from cat ${file} to cat file_${part_number}, because preprocessing was causing ${file} to be truncated at 2 GB, and splitting the output eliminated the preprocessing issues. However, while testing this change, I was unable to work out how to get Bash to keep behaving the same for some basic operations I was using to test the pipeline.
My original pipeline is:
cat giantfile.json | jq -c '.' | python postprocessor.py
With the original pipeline, if I'm testing changes to postprocessor.py or the preprocessor and I want to just test my changes with a couple of items from giantfile.json I can just use head and tail. Like so:
cat giantfile.json | head -n 2 - | jq -c '.' | python postprocessor.py
cat giantfile.json | tail -n 3 - | jq -c '.' | python postprocessor.py
The new pipeline that fixes the preprocessor issues is:
cat file_*.json | jq -c '.' | python postprocessor.py
This works fine, since every file gets output eventually. However, I don't want to wait 5-10 minutes for each test, so I tried to test with the first 2 lines of input using head.
cat file_*.json | head -n 2 - | jq -c '.' | python postprocessor.py
Bash sits there working far longer than it should, so I try:
cat file_*.json | head -n 2 - | jq -c '.'
And my problem is clear: Bash is outputting the content of all the files as if head was not even there, because each file now has 1 line of data in it. I've never needed to do this with bash before and I'm flummoxed.
Why does Bash behave this way, and how do I rewrite my little bash pipeline to work the way it used to, allowing me to select the first/last n lines of data to work with for testing?
My guess is that when you split the JSON up into individual files, you managed to remove the newline character from the end of each line, with the consequence that the concatenated output (cat file_*.json) is really only one line in total, because cat will not insert newlines between the files it is concatenating.
If the files were really one line each with a terminating newline character, piping through head -n 2 should work fine.
You can check this hypothesis with wc, since that utility counts newline characters rather than lines. If it reports that the files have 0 lines, then you need to fix your preprocessing.
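If wc does confirm missing newlines, one possible workaround (a sketch, not the only fix) is to let awk restore them, since awk terminates every record it prints with a newline, even a final record that had none:
awk 1 file_*.json | head -n 2 | jq -c '.' | python postprocessor.py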

Tail multiple remote files and pipe the result

I'm looking for a way to tail multiple log files on multiple remote servers and then pipe the merged result to another program.
Right now I'm using multitail, but it does not exactly do what I need, or maybe I'm doing something wrong!
I would like to be able to send the merged output of all log files to another program, for example jq. Right now, if I do:
multitail --mergeall -l 'ssh server1 "tail -f /path/to/log"' -l 'ssh server2 "tail -f /path/to/log"' -l 'ssh server3 "tail -f /path/to/log"' | jq .
for instance, I get this:
parse error: Invalid numeric literal at line 1, column 2
But more generally, I would like to give the output of this to another program I use to parse and display logs :-)
Thanks everybody!
One way to accomplish this feat would be to pipe all your outputs together into a named pipe and then deal with the output from that named pipe.
First, create your named pipe: $ mknod MYFIFO p
For each location you want to consolidate lines from, run $ tail -f logfile > MYFIFO (note: the tail -f can be run through an ssh session).
Then have another process take the data out of the named pipe and handle it appropriately. An ugly solution could be:
$ tail -f MYFIFO | jq .
Season to taste.
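Putting the pieces together, a sketch using the server names and log path from the question (the backgrounded ssh sessions need non-interactive authentication, e.g. keys):
$ mkfifo MYFIFO
$ ssh -n server1 'tail -f /path/to/log' > MYFIFO &
$ ssh -n server2 'tail -f /path/to/log' > MYFIFO &
$ ssh -n server3 'tail -f /path/to/log' > MYFIFO &
$ jq . < MYFIFO
Here -n keeps the backgrounded ssh sessions from reading the terminal's standard input, and mkfifo is the portable equivalent of mknod MYFIFO p.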

Joining every group of N lines into one with bash

I would like to join every group of N lines in the output of another command using bash.
Are there any standard Linux commands I can use to achieve this?
Example:
./command
46.219464 0.000993
17.951781 0.002545
15.770583 0.002873
87.431820 0.000664
97.380751 0.001921
25.338819 0.007437
Desired output:
46.219464 0.000993 17.951781 0.002545
15.770583 0.002873 87.431820 0.000664
97.380751 0.001921 25.338819 0.007437
If your output has a consistent number of fields, you can use xargs -n N to group N elements per line:
$ ...command... | xargs -n4
46.219464 0.000993 17.951781 0.002545
15.770583 0.002873 87.431820 0.000664
97.380751 0.001921 25.338819 0.007437
From man xargs:
-n max-args, --max-args=max-args
Use at most max-args arguments per command line. Fewer than max-args
arguments will be used if the size (see the -s option) is exceeded,
unless the -x option is given, in which case xargs will exit.
It seems like you're trying to join every two lines with a tab (\t) delimiter. If so, you could try the paste command below:
command | paste -d'\t' - -
If you want a space as the delimiter, use -d' ':
command | paste -d' ' - -
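If the group size is not always two, an awk sketch that joins every n lines (n=3 here is only an example value):
./command | awk -v n=3 '{ printf "%s%s", $0, (NR % n ? " " : "\n") } END { if (NR % n) print "" }'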

appending file contents as parameter for unix shell command

I'm looking for a Unix shell command to append the contents of a file as the parameters of another shell command. For example:
command << commandArguments.txt
xargs was built specifically for this:
cat commandArguments.txt | xargs mycommand
If you have multiple lines in the file, you can use xargs -L1 -P10 to run ten copies of your command at a time, in parallel.
xargs takes its standard input and formats it as arguments for another command. It was originally meant to deal with command-line length limits, but it is useful for other purposes as well.
For example, within the last minute I've used it to connect to 10 servers in parallel and check their uptimes:
echo server{1..10} | tr ' ' '\n' | xargs -n 1 -P 50 -I ^ ssh ^ uptime
Some interesting aspects of this command pipeline:
The names of the servers to connect to were taken from the incoming pipe
The tr is needed to put each name on its own line, because with -I xargs consumes its input line by line.
The -n option controls how many incoming lines are used per command invocation. -n 1 says make a new ssh process for each incoming line.
By default, the parameters are appended to the end of the command. With -I, one can specify a token (^) that will be replaced with the argument instead.
The -P controls how many child processes to run concurrently, greatly widening the space of interesting possibilities.
command `cat commandArguments.txt`
Using backticks (command substitution) inserts the output of the enclosed command into the outer command line; note that the result is subject to word splitting.
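For example (a sketch; mycommand is the placeholder from the answer above, and the flags written into the file are invented for illustration):
$ printf '%s\n' --verbose --force > commandArguments.txt
$ mycommand $(cat commandArguments.txt)      # modern $( ) form of the backtick substitution
$ xargs mycommand < commandArguments.txt     # the xargs form, without the extra cat
Unlike xargs, the command substitution is subject to word splitting and globbing, so it is only safe when the file contains simple, unquoted arguments.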
