unix - how to create a single string from a list of multiple outputs - bash

I've got a snippet here that I'm running on the command line which creates the following output:
$ { time mysql -u root -N -e "select NOW();" >/dev/null; } 2>&1 | grep real; echo ":localhost:"; date +"%m-%d-%y"
real 0m0.022s
:localhost:
04-28-17
I'd like my output to be a single string like so (or delimited by whatever I choose as the delimiter, if possible):
real 0m0.022s :localhost:04-28-17
What command can I use to concatenate or join these lines into a single string? Thanks.

Something like this should work, assuming bash as your shell:
echo "$(grep real < <({ time mysql -u root -N -e 'select NOW();'; } 2>&1)):localhost:$(date +'%m-%d-%y')"
The first $(...) could also be your original { time mysql -u root -N -e 'select NOW();'; } 2>&1 | grep real. I don't see a particularly compelling reason to prefer one way over the other.
The core concept, though, is that echo "$(...)" strips trailing newlines off the output of whatever's inside $(...), because command substitution discards them.
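You can see that stripping in isolation; here printf is only used to fabricate some trailing newlines:
$ printf 'hello\n\n\n' | wc -c
8
$ echo "$(printf 'hello\n\n\n')" | wc -c
6
The three trailing newlines are removed by the command substitution and echo adds back exactly one, which is what lets the pieces sit on a single line.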

You can just wrap that whole beast up and send it to sed to wipe out the linefeeds:
{ { time mysql -u root -N -e "select NOW();" >/dev/null; } 2>&1 | grep real; echo ":localhost:"; date +"%m-%d-%y"; } | sed ':a;N;$!ba;s/\n/ /g'
In Action:
$ { { time mysql -u root -N -e "select NOW();" >/dev/null; } 2>&1 | grep real; echo ":localhost:"; date +"%m-%d-%y"; } | sed ':a;N;$!ba;s/\n/ /g'
real 0m0.412s :localhost: 04-28-17
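For what it's worth, paste can do the same joining without the sed loop (the timing figure will of course vary from run to run):
$ { { time mysql -u root -N -e "select NOW();" >/dev/null; } 2>&1 | grep real; echo ":localhost:"; date +"%m-%d-%y"; } | paste -sd' ' -
real 0m0.412s :localhost: 04-28-17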

Related

What does the following shell script on page 70 of the book "Designing Data-Intensive Applications" by Martin Kleppmann mean?

#!/bin/bash
db_set () {
    echo "$1,$2" >> database
}
db_get () {
    grep "^$1," database | sed -e "s/^$1,//" | tail -n 1
}
What does db_get() do?
Especially the sed -e "s/^$1,//" part.
db_get() prints the last value for the key $1.
$1 and $2 are the arguments passed to the functions, e.g. $1=money, $2=34.
grep "^$1," database lists all lines starting with $1.
sed -e "s/^$1,//" then removes the "key," prefix, so that only the value remains.
tail -n 1 prints only the last line.
You can try this out yourself, e.g.:
$ cat database
jack,5
gill,6
jack,3
$ key=jack
$ grep "^$key," database
jack,5
jack,3
$ grep "^$key," database | sed -e "s/^$key,//"
5
3
$ grep "^$key," database | sed -e "s/^$key,//" | tail -n 1
3
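The db_set side is just an append, so a newer write for the same key wins on the next read; assuming the two functions have been sourced into the same shell:
$ db_set jack 7
$ cat database
jack,5
gill,6
jack,3
jack,7
$ db_get jack
7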

GNU parallel with custom script doing string comparison

The following script.sh compares part of a string (coming from stdin by cat-ing a CSV file) to a defined string and reports the differences in a certain format:
#!/usr/bin/env bash
reference="ABCDEFG"
ref_transp=$(echo "$reference" | sed -e 's/\(.\)/\1\n/g')
while read line; do
    line_transp=$(echo "$line" | cut -d',' -f2 | sed -e 's/\(.\)/\1\n/g')
    output=$(paste -d ' ' <(echo "$ref_transp") <(echo "$line_transp") | grep -vnP '([A-Z]) \1' | sed -E 's/([0-9][0-9]*):([A-Z]) ([A-Z]*)/\2\1\3/' | grep '^[A-Z][0-9][0-9]*[A-Z*]$')
    echo "$(echo ${line:0:35}, $output)"
done < "${1:-/dev/stdin}"
It is intended to be executed on a number of rows from a very large file in the format
XYZ,ABMDEFG
and it works well when I use it in a pipe:
cat large_file | ./find_something.sh
However, when I try to use it with parallel, I get this error:
$ cat large_file | parallel ./find_something.sh
./find_something.sh: line 9: XYZ, ABMDEFG : No such file or directory
What is causing this? Is parallel supposed to work for something like this, if I want to redirect the output to a single file afterwards?
Less important side note: I'm rather proud of my string comparison method, but if someone has a faster way to get from comparing ABCDEFG and XYZ,ABMDEFG to obtain XYZ,C3M I'd be happy to hear that, too.
Edit:
I should have said, I also want to preserve the order of each line in the output, corresponding to the input. Is that possible using parallel?
Your script accepts its input from a file (defaulting to stdin), whereas parallel will pass input as arguments, not via stdin. In that sense, parallel is closer to xargs.
Presumably, you want each of the lines in large_file to be processed as a unit, possibly in parallel.
That means you need your script to only process one such line at a time, and let parallel call your script many times, once for each line.
So your script should look like this:
#!/usr/bin/env bash
reference="ABCDEFG"
ref_transp=$(echo "$reference" | sed -e 's/\(.\)/\1\n/g')
line="$1"
line_transp=$(echo "$line" | cut -d',' -f2 | sed -e 's/\(.\)/\1\n/g')
output=$(paste -d ' ' <(echo "$ref_transp") <(echo "$line_transp") | grep -vnP '([A-Z]) \1' | sed -E 's/([0-9][0-9]*):([A-Z]) ([A-Z]*)/\2\1\3/' | grep '^[A-Z][0-9][0-9]*[A-Z*]$')
echo "$(echo ${line:0:35}, $output)"
Then you can redirect to a file as follows:
cat large_file | parallel ./find_something.sh > output_file
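As an aside, you can see the argument-versus-stdin difference in isolation, assuming GNU parallel is installed (output order is not guaranteed without -k):
$ printf 'a\nb\n' | parallel echo got arg: {}
got arg: a
got arg: b
$ printf 'a\nb\n' | parallel --pipe cat
a
b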
-k keeps the order.
#!/usr/bin/env bash
doit() {
    reference="ABCDEFG"
    ref_transp=$(echo "$reference" | sed -e 's/\(.\)/\1\n/g')
    while read line; do
        line_transp=$(echo "$line" | cut -d',' -f2 | sed -e 's/\(.\)/\1\n/g')
        output=$(paste -d ' ' <(echo "$ref_transp") <(echo "$line_transp") | grep -vnP '([A-Z]) \1' | sed -E 's/([0-9][0-9]*):([A-Z]) ([A-Z]*)/\2\1\3/' | grep '^[A-Z][0-9][0-9]*[A-Z*]$')
        echo "$(echo ${line:0:35}, $output)"
    done
}
export -f doit
cat large_file | parallel --pipe -k doit
# or
parallel --pipepart -a large_file --block -10 -k doit

Set a command to a variable in bash script problem

Trying to run a command stored in a variable, but I am getting strange results.
Expected result "1":
grep -i nosuid /etc/fstab | grep -iq nfs
echo $?
1
Unexpected result as a variable command:
cmd="grep -i nosuid /etc/fstab | grep -iq nfs"
$cmd
echo $?
0
It seems it returns 0 because the command itself was valid, not because of the actual outcome. How can I do this better?
You can only execute exactly one command stored in a variable. The pipe is passed as an argument to the first grep.
Example
$ printArgs() { printf %s\\n "$@"; }
# Two commands. The 1st command has parameters "a" and "b".
# The 2nd command prints stdin from the first command.
$ printArgs a b | cat
a
b
$ cmd='printArgs a b | cat'
# Only one command with parameters "a", "b", "|", and "cat".
$ $cmd
a
b
|
cat
How to do this better?
Don't execute the command using variables.
Use a function.
$ cmd() { grep -i nosuid /etc/fstab | grep -iq nfs; }
$ cmd
$ echo $?
1
Solution to the actual problem
I see three options for your actual problem:
Use a DEBUG trap and the BASH_COMMAND variable inside the trap (see the sketch after this list).
Enable bash's history feature for your script and use the history command.
Use a function which takes a command string and executes it using eval.
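A minimal sketch of the first option (the log file name cmd.log is just a placeholder, and the checks are kept to single commands here rather than pipelines):
#!/bin/bash
# log each command right before bash executes it
trap 'printf "RUNNING: %s\n" "$BASH_COMMAND" >> cmd.log' DEBUG

grep -iq nosuid /etc/fstab
grep -iq nfs /etc/fstab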
Regarding your comment on the last approach: You only need one function. Something like
execAndLog() {
    description="$1"
    shift
    if eval "$*"; then
        info="PASSED: $description: $*"
        passed+=("${FUNCNAME[1]}")
    else
        info="FAILED: $description: $*"
        failed+=("${FUNCNAME[1]}")
    fi
}
You can use this function as follows
execAndLog 'Scanned system' 'grep -i nfs /etc/fstab | grep -iq noexec'
The first argument is the description for the log, the remaining arguments are the command to be executed.
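Since the function records ${FUNCNAME[1]}, i.e. the name of whichever function called it, a summary at the end of the script could be as simple as this sketch (check_noexec is just a made-up wrapper name, and the two arrays are assumed to be initialised near the top of the script):
passed=()
failed=()

check_noexec() { execAndLog 'Scanned system' 'grep -i nfs /etc/fstab | grep -iq noexec'; }
check_noexec

printf 'passed: %s\n' "${passed[@]}"
printf 'failed: %s\n' "${failed[@]}"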
Using bash -x or set -x will allow you to see what bash executes:
> cmd="grep -i nosuid /etc/fstab | grep -iq nfs"
> set -x
> $cmd
+ grep -i nosuid /etc/fstab '|' grep -iq nfs
As you can see, your pipe | is passed as an argument to the first grep command.

looping grep function

So I was building a script for a co-worker so she can easily scan files for occurrences of strings. But I am having trouble with my grep command.
#!/bin/bash -x
filepath() {
    echo -n "Please enter the path of the folder you would like to scan, then press [ENTER]: "
    read path
    filepath=$path
}
filename () {
    echo -n "Please enter the path/filename you would like the output saved to, then press [ENTER]: "
    read outputfile
    fileoutput=$outputfile
    touch $outputfile
}
searchstring () {
    echo -n "Please enter the string you would like to search for, then press [ENTER]: "
    read searchstring
    string=$searchstring
}
codeblock() {
    for i in $(ls "${filepath}")
    do
        grep "'${string}'" "$i" | wc -l | sed "s/$/ occurance(s) in "${i}" /g" >> "${fileoutput}"
    done
}
filepath
filename
searchstring
codeblock
exit
I know there are a lot of extra variable "redirects"; I'm just practicing my scripting. Here is the error I am receiving when I run the script.
+ for i in '$(ls "${filepath}")'
+ grep ''\''<OutageType>'\''' *filename*.DONE.xml
+ wc -l
+ sed 's/$/ occurance(s) in *filename*.DONE.xml /g'
grep: *filename*.DONE.xml: No such file or directory
However if I run the grep command with the wc and sed functions from CLI it works fine.
# grep '<OutageNumber>' "*filename*.DONE.xml" | wc -l | sed "s/$/ occurance(s) in "*filename*.DONE.xml" /g"
13766 occurance(s) in *filename*.DONE.xml
There are several things going wrong here.
for i in $(ls "${filepath}")
The value of filepath is *filename*.DONE.xml, and if you assume that the * gets expanded there, that won't happen. A double-quoted string variable is taken literally by the shell; the * will not get expanded.
If you want wildcard characters to be expanded to match filename patterns,
then you cannot double-quote the variable in the command.
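A quick way to see the difference, in a directory that happens to contain a.xml and b.xml (made-up names):
$ filepath='*.xml'
$ echo "$filepath"
*.xml
$ echo $filepath
a.xml b.xml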
Next, it's strongly discouraged to parse the output of the ls command. This would be better:
for i in ${filepath}
And this still won't be "perfect", because if there are no files matching the pattern,
then grep will fail. To avoid that, you could enable the nullglob option:
shopt -s nullglob
for i in ${filepath}
Finally, I suggest to eliminate this for loop,
and use the grep command directly:
grep "'${string}'" ${filepath} | ...

Parsing timestamp using sed and embedded command

There's a file with some lines containing some text and either a date or a time stamp:
...
string1-20141001
string2-1414368000000
string3-1414454400000
...
I want to quickly convert time stamps to dates, like this:
$ date -d @1414368000 +"%Y%m%d"
20141027
and I want to do this dynamically with sed or some similar command-line tool. For testing, I tried this, unsuccessfully:
$ echo "something-1414454400000" | sed "s/-\(..........\)...$/-$(date -d #\\1 +'%Y%m%d')/"
date: invalid date '@\\1'
something-
but echoing seems to be working:
$ echo "something-1414454400000" | sed "s/-\(..........\)...$/-$(echo \\1)/"
something-1414454400
so what could be done?
It's interesting what's happening here. Some pointers:
Always single-quote your regex for sed, if possible, when using BASH (etc), especially if using special characters like $. This is why date is being run (with -d @\\1) before sed even gets involved.
Your "working" echo example isn't, actually (I believe): echo \\1 produces \1 (and as above, will do so before sed even gets invoked). This then happens to valid sed replacement syntax, so will substitute your group on the LHS, which is why the output looks about right.
Note that by using -r, you can use easier / more advanced regex syntax.
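As a quick check of that echo point, run it on its own and you can see exactly what sed is handed as the replacement text:
$ echo \\1
\1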
Hard to say exactly what to do without a bit more context, but to fix the immediate problems, try something like:
echo "something-1414454400000" | sed -re 's/-([0-9]{10,}).+/-$(date -d #\1 +"%Y%m%d")/'
which produces: $(date -d @1414454400) (which you can then pipe to sh)
Or for a more complete solution, you can change the regex to produce a shell command directly, and pipe it:
echo "something-1414454400000" | sed -re 's/(.*-)([0-9]{10,10}).+/echo \1$(date -d #\2 \"+%Y%M%d\")/' | sh
..producing something-20140028
You can do this in BASH:
while read -r p; do
    if [[ "$p" =~ ^(.+-)([0-9]{10}).{3}$ ]]; then
        echo -n "${BASH_REMATCH[1]}"
        date -d "@${BASH_REMATCH[2]}" +"%Y%m%d"
    else
        echo "$p"
    fi
done < file
OUTPUT:
string1-20141001
string2-20141026
string3-20141027
awk -F- 'BEGIN { OFS=FS }
$2 ~ /^[0-9]{13}$/ {
    "date -d@" $2/1000 " +%Y%m%d " | getline t; $2=t }1'
Just try this command. I have checked it. It is working on your inputs.
cat file | sed -E "s,(.*)-(.*),\1-`date -d @1414368000 +'%Y%m%d'`,g"
