How to use xargs to delete redis keys containing `\\`?

Goal: I want to delete redis keys matching a pattern, and for this I use xargs.
If I use this command
redis-cli KEYS "*SomeService::getFromId*"
I see a lot of results, for example:
redis-cli KEYS "*SomeService::getFromId*"
1) "sa.:App\\Services\\SomeService::getFromId.deb525724eacbadb4ccdda90d787a41e"
2) "sa.:App\\Services\\SomeService::getFromId.0e8333deeded62761735adab2a6516f5"
Then when I run:
redis-cli KEYS "*SomeService::getFromId*" | xargs -n 1 redis-cli del
I get
(integer) 0
(integer) 0
So nothing is deleted.
If I try to debug by running xargs with echo I get this result:
redis-cli KEYS "*SomeService::getFromId*" | xargs -n 1 echo
sa.:AppServicesSomeService::getFromId.5683abfb173fe66bc078df0a6a85eeb7
sa.:AppServicesSomeService::getFromId.4df008768a8e05b6dd7bdab51b00a774
I notice that the \\ is stripped by xargs -n 1.
How do I keep the key intact, i.e.
sa.:App\\Services\\SomeService::getFromId.deb525724eacbadb4ccdda90d787a41e
instead of having the backslashes removed:
sa.:AppServicesSomeService::getFromId.5683abfb173fe66bc078df0a6a85eeb7

I just skipped xargs and, after starting redis-cli, ran the following instead:
EVAL "return redis.call('del', unpack(redis.call('keys', '*SomeService::getFromId*')))" 0
It worked great; the output was
(integer) 96
and all 96 keys matching that pattern were gone.
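For the record, the xargs route can also be made to work. The backslashes disappear because, without -0 or -d, xargs treats \, ' and " as quoting characters and strips them while splitting its input. Telling xargs to split on newlines only keeps the keys intact. A sketch (using echo as a stand-in, since the key below is made up; -d is GNU xargs, the tr variant is portable):

```shell
# Default xargs parsing collapses \\ to \ :
printf '%s\n' 'sa.:App\\Services\\Svc::x.abc' | xargs -n 1 echo

# -d '\n' (GNU xargs) disables quote processing and splits on newlines only,
# so the backslashes survive:
printf '%s\n' 'sa.:App\\Services\\Svc::x.abc' | xargs -d '\n' -n 1 echo

# Applied to the original problem (portable variant via tr + -0):
# redis-cli KEYS "*SomeService::getFromId*" | tr '\n' '\0' | xargs -0 -n 1 redis-cli DEL
```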

Related

Use a for loop and save output in different files

I'm trying to write to two files from one command: one file should get only a single entry, and the other the complete list. This is the example:
I tried various commands
#!/bin/bash
for i in range 4
do
echo "test" >one >>list
done
I need the file "one" to contain only the output of the last loop iteration, and "list" to contain the output of every iteration.
You can use tee for this; since tee still writes to stdout, you can do something like
#!/bin/bash
for i in {1..4}
do
echo "test" | tee one >>list
done
or this if you want to see the echoed output when you run it; the -a flag tells tee to append rather than truncate:
#!/bin/bash
for i in {1..4}
do
echo "test" | tee one | tee -a list
done
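A quick demonstration of what ends up where (a sketch; I vary the echoed text per iteration so you can see which iteration lands in "one"):

```shell
#!/bin/bash
rm -f one list
for i in {1..4}
do
  # tee truncates and rewrites "one" each time; >> appends to "list"
  echo "test $i" | tee one >> list
done
cat one        # only the last iteration: test 4
wc -l < list   # all four iterations: 4
```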

Find subsequent commands in history

To get a list of all previous ssh commands I can type:
$ history | grep ssh
1234 ssh x@y.z
1235 ssh y@z.a
1236 ssh z@a.b
…
But I am searching for a way to list all ssh commands directly followed by an rsync command. The result should look like this:
1234 ssh x@y.z
1235 rsync y@z.de …
…
4321 ssh y@z.a
4322 rsync z@a.b …
So I am basically trying to find subsequent words in subsequent lines…
One option is to use:
history | grep -A 1 ssh | grep -B 1 rsync
which is suboptimal because it also matches cases in which you ran rsync and ssh on the same line.
So you can do better:
history | cut -c 8- | grep -A 1 ^ssh | grep -B 1 ^rsync
Here I am using the history command as you were (another alternative would have been to read the history file directly).
Then I remove the line numbers with cut. (This is not very elegant, because it assumes the line-number column in the history output is always 8 characters wide; you may have to check that 8 is the right number. For the versions I can check, it is.)
Then I grep for lines that start (^) with ssh and print each match plus the line after it (-A 1).
Finally I grep for lines that start (^) with rsync and print each match plus the line before it (-B 1).
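Another option is a single awk pass that remembers the previous line and prints the pair whenever an ssh line is directly followed by an rsync line. A sketch (the simulated history output and hosts below are made up; in real use you'd pipe `history` into the awk program):

```shell
# Simulated `history` output (hypothetical hosts)
printf '%s\n' \
  ' 1234  ssh x@y.z' \
  ' 1235  rsync y@z.de /src /dst' \
  ' 1236  ls' \
  ' 4321  ssh y@z.a' \
  ' 4322  rsync z@a.b /src /dst' |
awk '
  {cmd = $2}                                      # second field is the command name
  prev == "ssh" && cmd == "rsync" {print last; print $0}
  {prev = cmd; last = $0}                         # remember this line for the next
'
```

Unlike the grep -A/-B chain, this never matches ssh and rsync on the same line, since it compares the command fields of two consecutive lines.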

Search all occurrences of instance ids in a variable

I have a bash variable which has the following content:
SSH exit status 255 for i-12hfhf578568tn
i-12hdfghf578568tn is able to connect
i-13456tg is not able to connect
SSH exit status 255 for 1.2.3.4
I want to find the strings starting with i- and extract only the instance id. So, for the above input, I want the output below:
i-12hfhf578568tn
i-12hdfghf578568tn
i-13456tg
I am open to use grep, awk, sed.
I am trying to achieve this with the following command, but it gives me the whole line:
grep -oE 'i-.*' <<<"$variable"
Any help?
You can just change your grep command to:
grep -oP 'i-[^\s]*' <<<$variable
Tested on your input:
$ cat test
SSH exit status 255 for i-12hfhf578568tn
i-12hdfghf578568tn is able to connect
i-13456tg is not able to connect
SSH exit status 255 for 1.2.3.4
$ var=`cat test`
$ grep -oP 'i-[^\s]*' <<<$var
i-12hfhf578568tn
i-12hdfghf578568tn
i-13456tg
grep is exactly what you need for this task; sed would be more suitable if you had to reformat the input, and awk would shine if you had to reformat a string or compute something from the fields in the rows and columns.
Explanation:
-P enables Perl-compatible regular expressions (PCRE)
i-[^\s]* is a regex that matches a literal i- followed by 0 to N non-space characters. You could change the * to a + if you want to require at least one character after the -, or use the {min,max} syntax to impose a range.
Let me know if there is something unclear.
Bonus:
Following the comment of Sundeep, you can use one of these improved versions of the regex I proposed (the first uses PCRE, the second POSIX regex):
grep -oP 'i-\S*' <<<$var
or
grep -o 'i-[^[:blank:]]*' <<<$var
You could also use the following (tested with GNU awk, which treats a multi-character RS as a regex, splitting records on spaces and newlines):
echo "$var" | awk -v RS='[ \n]' '/^i-/'
You can also use this (tested on Unix):
echo "$var" | grep -o "i-[0-z]*"
Here,
-o # Prints only the matching part of the lines
i-[0-z]* # Matches i- followed by characters in the ASCII range from '0' to 'z'. That covers digits and letters, but also some punctuation in between (such as : ; @ [ ] ^ _); i-[[:alnum:]]* would be stricter.
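Since you mentioned being open to sed, here is a sed version that keeps only the id on matching lines (a sketch; it assumes at most one instance id per line, as in your sample):

```shell
var='SSH exit status 255 for i-12hfhf578568tn
i-12hdfghf578568tn is able to connect
i-13456tg is not able to connect
SSH exit status 255 for 1.2.3.4'

# -n suppresses default output; s///p prints only lines where the
# substitution (keep just the i-… token, drop everything around it) succeeded
printf '%s\n' "$var" | sed -n 's/.*\(i-[[:alnum:]]*\).*/\1/p'
```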

Print out a statement before each output of my script

I have a script that counts the occurrences of the word "Author" in each file in a folder and prints the counts, one line per file, ordered from highest to lowest. In total I have 825 files. An example output would be
53
22
17
I want to print something before each number on every line, namely hotel_$i, so the above example would become:
hotel_1 53
hotel_2 22
hotel_3 17
I have tried doing this using a for loop in my shell script:
for i in {1..825}
do
echo "hotel_$i"
find . -type f -exec bash -c 'grep -wo "Author" {} | wc -l' \; | sort -nr
done
but this prints hotel_1, then does the search and sort over all 825 files, then prints hotel_2 and repeats the whole search and sort, and so on. How do I get one label before every line of the output?
You can use the paste command, which combines lines from different files:
paste <(printf 'hotel_%d\n' {1..825}) \
<(find . -type f -exec bash -c 'grep -wo "Author" {} | wc -l' \; | sort -nr)
(Just putting this on two lines for readability; it can be a one-liner without the \.)
This combines paste with process substitution, making the output of a command look like a file (a named pipe) to paste.
The first command prints hotel_1, hotel_2 etc. on a separate line each, and the second command is your find command.
For short input files, the output looks like this:
hotel_1 7
hotel_2 6
hotel_3 4
hotel_4 3
hotel_5 3
hotel_6 2
hotel_7 1
hotel_8 0
hotel_9 0
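An alternative without paste: since the labels are just sequential numbers, awk can prepend them using its record counter NR. A sketch built on the find pipeline from the question (I pass the filename to bash -c as an argument instead of embedding {} in the command string, which is the safer form):

```shell
find . -type f -exec bash -c 'grep -wo "Author" "$1" | wc -l' _ {} \; |
sort -nr |
awk '{print "hotel_" NR, $0}'   # NR is the line number within the sorted output
```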

xargs input involving spaces

I am working on a Mac using OSX and I'm using bash as my shell. I have a script that goes something to the effect of:
VAR1="pass me into parallel please!"
VAR2="oh me too, and there's actually a lot of us, but its best we stay here too"
printf "%s\n" {0..249} | xargs -0 -P 8 -n 1 . ./parallel.sh
I get the error xargs: .: Permission denied. The purpose is to run another script (called parallel.sh) in parallel, fed the numbers 0-249. Additionally, I want to make sure parallel.sh can see and use VAR1 and VAR2. But when I try to source the script with . ./parallel.sh, xargs doesn't like that. The point of sourcing is that the script should have access to other variables defined here as well.
I have read something about using -print0, since xargs separates its input on spaces, but I didn't really understand what -print0 does or how to use it. Thanks for any help you can offer.
If you want several processes running the script, they can't be part of the parent process, and therefore they can't access the exact same variables. However, if you export your variables, each process gets a copy of them:
export VAR1="pass me into parallel please!"
export VAR2="oh me too, and there's actually a lot of us, but its best we stay here too"
printf "%s\n" {0..249} | xargs -P 8 -n 1 ./parallel.sh
Now you can just drop the extra dot, since you aren't sourcing the parallel.sh script, just running it.
Also, there is no need for -0, since your input is just a series of numbers, one per line.
To avoid the space problem, I'd use the newline character as the separator for xargs, with the -d option (GNU xargs; the BSD xargs shipped with macOS doesn't support -d):
xargs -d '\n' ...
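If you ever do want NUL separation (say, because arguments could themselves contain whitespace or newlines), generate NUL-delimited input yourself so that -0 actually has separators to split on. A sketch, using echo as a stand-in for parallel.sh and dropping -P so the output order is deterministic:

```shell
# printf '%s\0' emits each number followed by a NUL byte,
# which is exactly the separator `xargs -0` expects
printf '%s\0' {0..9} | xargs -0 -n 1 echo
```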
I think you have a permissions issue; try adding execute permission to parallel.sh (chmod +x parallel.sh).
The command works fine for me:
Kaizen ~/so_test $ printf "%s\n" {0..4} | xargs -0 -P 8 -n 1 echo
0
1
2
3
4
From man find:
-print0
True; print the full file name on the standard output, followed by a null character (instead of the newline character that -print uses). This allows file names that contain newlines or other types of white space to be correctly interpreted by programs that process the find output. This option corresponds to the -0 option of xargs.
For -print0 usage, check out this related Stack Overflow question:
Capturing output of find . -print0 into a bash array
The issue of passing arguments is related to xarg's interpretation of white space. From the xargs man page:
-0 Change xargs to expect NUL (``\0'') characters as separators, instead of spaces and newlines.
The issue of environment variables can be solved by using export to make the variables available to subprocesses:
say.sh
echo "$1 $V"
result
bash$ export V=whatevs
bash$ printf "%s\n" {0..3} | xargs -P 8 -n 1 ./say.sh
1 whatevs
2 whatevs
0 whatevs
3 whatevs
