Why doesn't this use of the sed command work? - bash

I have a bash script that I use to manipulate files on a computation cluster.
The files I am trying to manipulate are of the format:
beadSize=6.25
minBoxSize=2.2
lipids=1200
chargedLipids=60
cations=0
HEAD=0
CHEAD=-2
BODY=2
TAIL=3
ION=-1
RHO_BODY=10
RHO_TAIL=14
tol=1e-10
lb=7.1
FTsize=8
ROUNDS=1000000
ftROUNDS=10
wROUNDS=1000
dt=0.01
alpha=1
transSize=0.15
transSizeZ=0.0
ionsTransSize=2.8
ionsTransSizeZ=2.8
rotateSize=0.18
volSize=8
modSize=0.0
forceFactor=2
kappaCV=0
sysSize=26
zSize=300
iVal=1
split=0
randSeed=580
I call a function inside a loop:
for per in $(seq 70 -5 5); do
for seed in {580..583}; do
for c in {"fs","fd","bfs","bfd"}; do
let count=$count+1
startJob $per $seed $c $count
done
done
done
and the lines I use to manipulate:
let n=$1*12
echo $n
cat trm.dat | sed '/memFile*/d' | sed '/rStart*/d' | sed '/test*/d'| sed 's/modSize=[0-9.]*/modSize=0.0/' | sed 's/chachargedLipids=[0-9]*/chargedLipids="$n"/' | grep char #> propFile.dat
For $per=15, for example, I expect $n==180. However, when I run the script I see:
180
chargedLipids=120
What am I doing wrong?
Note
I have also tried to use:
sed "s/chachargedLipids=[0-9]*/chargedLipids=$n/"
With the same result.

chacharged != charged, so the final sed is doing nothing. And since the expression is in single quotes, $n is not expanded by the shell; you would expect to see the literal text chargedLipids="$n" in your output if a replacement were being made.
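For reference, a corrected version of the pipeline might look something like this (pattern name fixed, double quotes around the expression containing $n so the shell expands it, and a single sed invocation is enough):
let n=$1*12
sed -e '/memFile/d' -e '/rStart/d' -e '/test/d' \
    -e 's/modSize=[0-9.]*/modSize=0.0/' \
    -e "s/chargedLipids=[0-9]*/chargedLipids=$n/" trm.dat > propFile.dat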

Related

Parsing CSV records when a value is multiline

Source file looks like this:
"google.com", "vuln_example1
vuln_example2
vuln_example3"
"facebook.com", "vuln_example2"
"reddit.com", "stupidly_long_vuln_name1"
"stackoverflow.com", ""
I've been trying to get the output to be something like this, but the line breaks seem to cause me no end of problems. I'm using a "while read line" loop to do this because I do some processing on the columns (e.g. vulnerability count and URL in this example). This is output into a Jenkins job (yuk).
The basic summary of the problem is getting the line breaks in the CSV to be output into the third column while retaining the table structure. I've got a sort of weird example of the desired output below.
||hostname ||Vulnerability count|| Vulnerability list || URL ||
|google.com |3 |vuln_example1 |http://cve.com/vuln_example1|
| | |vuln_example2 |http://cve.com/vuln_example2|
| | |vuln_example3 |http://cve.com/vuln_example3|
|facebook.com |1 |vuln_example2 |http://cve.com/vuln_example2|
|reddit.com |1 |stupidly_long_vuln_name1 |http://cve.com/stupidly_long_vuln_name1|
|stackoverflow.com |0 | ||
Looking at this... I've got a feeling it might be easier by showing some code and example output.
Parsing your input with the command line below makes the problem easier (I'm assuming the inputs are correct):
perl -0777 -pe 's/([^"])\s*\n/\1 /g ; s/[",]//g' < sample.txt
This line invokes Perl to perform two regex substitutions:
s/([^"])\s*\n/\1 /g: This substitution removes an end of line if it doesn't terminate by a quote " (i.e. if a host entry, with all vulnerabilities isn't yet complete).
s/[",]//g removes all quotes and commas remaining.
For each host entry like this one:
"google.com", "vuln_example1
vuln_example2
vuln_example3"
You'll get:
google.com vuln_example1 vuln_example2 vuln_example3
Then you can assume that each line contains a host followed by its set of vulnerabilities.
The example below stores the vulnerabilities in an array and loops through it, formatting and printing each row:
# Replace this by your custom function
# to get an URL for a given vulnerability
function get_vuln_url () {
    # This just builds a dummy URL for a non-empty arg
    [[ -z "$1" ]] || echo "http://host/$1.htm"
}
# Format your line (see printf help)
function print_row () {
    printf "%-20s|%5s|%-30s|%s\n" "$@"
}
# The perl line reformats the input: one host plus its vulnerabilities per line
perl -0777 -pe 's/([^"])\s*\n/\1 /g ; s/[",]//g' < sample.txt |
while read -r line ; do
    arr=(${line})
    # host, vulnerability count, first vulnerability and its URL
    print_row "${arr[0]}" "$((${#arr[@]} - 1))" "${arr[1]}" "$(get_vuln_url "${arr[1]}")"
    # remaining vulnerabilities, one per row
    for v in "${arr[@]:2}" ; do
        print_row " " " " "$v" "$(get_vuln_url "$v")"
    done
done
Output:
google.com | 3|vuln_example1 |http://host/vuln_example1.htm
| |vuln_example2 |http://host/vuln_example2.htm
| |vuln_example3 |http://host/vuln_example3.htm
facebook.com | 1|vuln_example2 |http://host/vuln_example2.htm
reddit.com | 1|stupidly_long_vuln_name1 |http://host/stupidly_long_vuln_name1.htm
stackoverflow.com | 0| |
Update.
If you don't have Perl, and if your file doesn't contain tabs, you can use this command as a workaround instead:
tr '\n' '\t' < sample.txt | sed -r -e 's/([^"])\s*\t/\1 /g' -e 's/[",]//g' -e 's/\t/\n/g'
tr '\n' '\t' replaces every newline with a tab
the sed part acts like the Perl line, except that it works on tabs instead of newlines and finally restores the tabs back to newlines.
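Assuming GNU sed (for -r and \s), this workaround should yield the same intermediate format as the Perl line on the sample input, roughly:
google.com vuln_example1 vuln_example2 vuln_example3
facebook.com vuln_example2
reddit.com stupidly_long_vuln_name1
stackoverflow.com
which can then be piped into the same while read loop as above.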

Bash capturing in brace expansion

What would be the best way to use something like a capturing group in regex for brace expansion. For example:
touch {1,2,3,4,5}myfile{1,2,3,4,5}.txt
results in all combinations of the two numbers, i.e. 25 different files. But in case I just want files like 1myfile1.txt, 2myfile2.txt, ... with the first and second number the same, this obviously doesn't work. So I'm wondering what would be the best way to do this?
I'm thinking about something like capturing the first number, and using it a second time. Ideally without a trivial loop.
Thanks!
Not using a regex, but with a for loop and seq you get the same result:
for i in $(seq 1 5); do touch ${i}myfile${i}.txt; done
Or tidier:
for i in $(seq 1 5);
do
touch ${i}myfile${i}.txt;
done
As an example, using echo instead of touch:
➜ for i in $(seq 1 5); do echo ${i}myfile${i}.txt; done
1myfile1.txt
2myfile2.txt
3myfile3.txt
4myfile4.txt
5myfile5.txt
Variation on MTwarog's answer with one less pipe/subprocess:
$ echo {1..5} | tr ' ' '\n' | xargs -I '{}' touch {}myfile{}.txt
$ ls -1 *myfile*
1myfile1.txt
2myfile2.txt
3myfile3.txt
4myfile4.txt
5myfile5.txt
You can use AWK to do that:
echo {1..5} | tr ' ' '\n' | awk '{print $1"filename"$1".txt"}' | xargs touch
Explanation:
echo {1..5} - prints range of numbers
tr ' ' '\n' - splits numbers to separate lines
awk '{print $1"filename"$1}' - enables you to format output using previously printed numbers
xargs touch - passes filenames to touch command (creates files)

How to append lots of variables to one variable with a simple command

I want to stick all the variables into one variable
A=('blah')
AA=('blah2')
AAA=('blah3')
AAB=('blah4')
AAC=('blah5')
# ^^ let's pretend there's 100 more of these ^^
#Variable composition
# after AAA comes AAB, then AAC, then AAD, etc., and so on for 100 variables
I want them all placed into this MASTER variable
#MASTER=${A}${AA}${AAA} (<-- insert AAB, AAC and 100 more variables here)
I obviously don't want to type 100 variables in this expression, because there's probably an easier way to do this. Plus I'm going to be doing more of these, so I need it automated.
I'm relatively new to sed and awk; is there a way to append those 100 variables to the master variable?
For this specific purpose I DO NOT want an array.
You can use a simple one-liner, quite straightforward, though more expensive:
master=$(set | grep -E '^(A|AA|A[A-D][A-D])=' | sort | cut -f2- -d= | tr -d '\n')
set lists all the variables in name=value format
grep filters out the variables we need
sort puts them in the right order (probably optional since set gives a sorted output)
cut extracts the values, removing the variable names
tr removes the newlines
Let's test it.
A=1
AA=2
AAA=3
AAB=4
AAC=5
AAD=6
AAAA=99 # just to make sure we don't pick this one up
master=$(set | grep -E '^(A|AA|A[A-D][A-D])=' | sort | cut -f2- -d= | tr -d '\n')
echo "$master"
Output:
123456
With my best guess, how about:
#!/bin/bash
A=('blah')
AA=('blah2')
AAA=('blah3')
AAB=('blah4')
AAC=('blah5')
# to be continued ..
for varname in A AA A{A..D}{A..Z}; do
    value=${!varname}
    if [ -n "$value" ]; then
        MASTER+=$value
    fi
done
echo $MASTER
which yields:
blahblah2blah3blah4blah5...
Although I'm not sure whether this is what the OP wants.
echo {a..z}{a..z}{a..z} | tr ' ' '\n' | head -n 100 | tail -n 3
adt
adu
adv
tells us that it would take going from AAA to ADV to reach 100 names, or to ADY for 103.
echo A{A..D}{A..Z} | sed 's/ /}${/g'
AAA}${AAB}${AAC}${AAD}${AAE}${AAF}${AAG}${AAH}${AAI}${AAJ}${AAK}${AAL}${AAM}${AAN}${AAO}${AAP}${AAQ}${AAR}${AAS}${AAT}${AAU}${AAV}${AAW}${AAX}${AAY}${AAZ}${ABA}${ABB}${ABC}${ABD}${ABE}${ABF}${ABG}${ABH}${ABI}${ABJ}${ABK}${ABL}${ABM}${ABN}${ABO}${ABP}${ABQ}${ABR}${ABS}${ABT}${ABU}${ABV}${ABW}${ABX}${ABY}${ABZ}${ACA}${ACB}${ACC}${ACD}${ACE}${ACF}${ACG}${ACH}${ACI}${ACJ}${ACK}${ACL}${ACM}${ACN}${ACO}${ACP}${ACQ}${ACR}${ACS}${ACT}${ACU}${ACV}${ACW}${ACX}${ACY}${ACZ}${ADA}${ADB}${ADC}${ADD}${ADE}${ADF}${ADG}${ADH}${ADI}${ADJ}${ADK}${ADL}${ADM}${ADN}${ADO}${ADP}${ADQ}${ADR}${ADS}${ADT}${ADU}${ADV}${ADW}${ADX}${ADY}${ADZ
The final cosmetics are easy to do by hand.
One-liner using a for loop:
for n in A AA A{A..D}{A..Z}; do str+="${!n}"; done; echo ${str}
Output:
blahblah2blah3blah4blah5
Say you have the input file inputfile.txt with arbitrary variable names and values:
name="Joe"
last="Doe"
A="blah"
AA="blah2
then do:
master=$(eval echo $(grep -o "^[^=]\+" inputfile.txt | sed 's/^/\$/;:a;N;$!ba;s/\n/$/g'))
This will concatenate the values of all variables in inputfile.txt into the master variable. So you will have:
>echo $master
JoeDoeblahblah2
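Note that eval can only expand variables that already exist in the current shell, so you would typically source the file first; a minimal sketch, assuming inputfile.txt contains valid shell assignments as shown above:
source inputfile.txt    # defines name, last, A and AA in the current shell
master=$(eval echo $(grep -o "^[^=]\+" inputfile.txt | sed 's/^/\$/;:a;N;$!ba;s/\n/$/g'))
echo "$master"          # JoeDoeblahblah2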

How to use sed to extract a string [duplicate]

This question already has answers here:
BASH extract value after string in variable Not file [duplicate]
(2 answers)
Closed last year.
I need to extract a number from the output of a command, cmd. The output is: type: 1000
So my question is how to execute the command, store its output in a variable and extract 1000 in a shell script. Also how do you store the extracted string in a variable?
This question has been answered in pieces here before; it would be something like this:
line=$(sed -n '2p' myfile)
echo "$line"
if [ "$(echo "$line" | grep 'type: 1000')" ]; then
    echo "It's there!"
fi
Store output of sed into a variable
String contains in Bash
EDIT: sed is very limited, you would need to use bash, perl or awk for what you need.
This is a typical use case for grep:
output=$(cmd | grep -o '[0-9]\+')
You can write the output of a command or even a pipeline of commands into a shell variable using so-called command substitution:
variable=$(cmd);
In the comments it emerged that the output of cmd contains more lines than just type : 1000. In this case I would suggest sed:
output=$(cmd | sed -n 's/type : \([0-9]\+\)/\1/p;q')
You tagged your question as sed but your question description does not restrict other tools, so here's a solution using awk.
output=`cmd | awk -F':' '/type: [0-9]+/{print $2}'`
Alternatively, you can use the newer $( ) syntax. Some find the newer syntax preferable, and it can be conveniently nested without the need for escaping backticks.
output=$(cmd | awk -F':' '/type: [0-9]+/{print $2}')
If the output is rigidly restricted to "type: " followed by a number, you can just use cut.
var=$(echo 'type: 1000' | cut -f 2 -d ' ')
Obviously you'll have to pipe the output of your command to cut; I'm using echo as a demo.
In addition, I'd use grep and then cut if the string you are searching for is more complex. If we assume there can be all kinds of numbers in the text, but only one occurrence of "type: " followed by a number, you can use the command:
>> var=$(echo "hello 12 type: 1000 foo 1001" | grep -oE "type: [0-9]+" | cut -f 2 -d ' ')
>> echo $var
1000
You can use the | operator to send the output of one command to another, like so:
echo " 1\n 2\n 3\n" | grep "2"
This sends the string " 1\n 2\n 3\n" to the grep command, which will search for the line containing 2. It sound like you might want to do something like:
cmd | grep "type"
Here is a plain sed solution that uses a regular expression to find the number in your string:
cmd | sed 's/^.*type: \([0-9]\+\)/\1/g'
^ means from the start of the line
.* matches any characters (possibly none)
\([0-9]\+\) captures the digits (at least one)
\1 refers to that first captured group and uses it as the replacement for the whole line
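To store the extracted number in a variable, as the question also asks, wrap the pipeline in a command substitution (here echo merely stands in for the real cmd):
num=$(echo 'type: 1000' | sed 's/^.*type: \([0-9]\+\)/\1/g')
echo "$num"    # prints 1000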

How to concatenate stdin and a string?

How do I concatenate stdin and a string, like this?
echo "input" | COMMAND "string"
and get
inputstring
A bit hacky, but this might be the shortest way to do what you asked in the question (use a pipe to send the stdout of echo "input" as stdin to another process/command):
echo "input" | awk '{print $1"string"}'
Output:
inputstring
What task exactly are you trying to accomplish? More context can get you better direction towards a solution.
Update - responding to comment:
@NoamRoss
The more idiomatic way of doing what you want is then:
echo 'http://dx.doi.org/'"$(pbpaste)"
The $(...) syntax is called command substitution. In short, it executes the commands enclosed in a new subshell, and substitutes its stdout output where the $(...) was invoked in the parent shell. So you would get, in effect:
echo 'http://dx.doi.org/'"rsif.2012.0125"
Use cat - to read from stdin, and put it in $() to throw away the trailing newline:
echo input | COMMAND "$(cat -)string"
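As a concrete illustration, with printf standing in for COMMAND:
echo "input" | printf '%s\n' "$(cat -)string"    # prints: inputstring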
However, why not drop the pipe and grab the output of the left side in a command substitution:
COMMAND "$(echo input)string"
I'm often using pipes, so this tends to be an easy way to prefix and suffix stdin:
echo -n "my standard in" | cat <(echo -n "prefix... ") - <(echo " ...suffix")
prefix... my standard in ...suffix
There are several ways of accomplishing this; I personally think the best is:
echo input | while read line; do echo $line string; done
Another is to substitute "$" (the end-of-line anchor) with " string" in a sed command:
echo input | sed "s/$/ string/g"
Why do I prefer the former? Because it appends the string to stdin as each line arrives. For example, with the following command:
(echo input_one ;sleep 5; echo input_two ) | while read line; do echo $line string; done
you immediately get the first output:
input_one string
and then after 5 seconds you get the other echo:
input_two string
On the other hand, with "sed" the whole content of the parentheses is executed first and only then handed to "sed", so the command
(echo input_one ;sleep 5; echo input_two ) | sed "s/$/ string/g"
will output both the lines
input_one string
input_two string
after 5 seconds.
This can be very useful when you are calling functions that take a long time to complete and want to be continuously updated about their output.
You can do it with sed:
seq 5 | sed '$a\6'
seq 5 | sed '$ s/.*/& 6/'
In your example:
echo input | sed 's/.*/&string/'
I know this is a few years late, but you can accomplish this with the xargs -J option:
echo "input" | xargs -J "%" echo "%" "string"
And since it is xargs, you can do this on multiple lines of a file at once. If the file 'names' has three lines, like:
Adam
Bob
Charlie
You could do:
cat names | xargs -n 1 -J "%" echo "I like" "%" "because he is nice"
Also works:
seq -w 0 100 | xargs -I {} echo "string "{}
Will generate strings like:
string 000
string 001
string 002
string 003
string 004
...
The command you posted would take the string "input" and use it as COMMAND's stdin stream, which would not produce the results you are looking for unless COMMAND first printed out the contents of its stdin and then printed out its command line arguments.
It seems like what you want to do is closer to command substitution.
http://www.gnu.org/software/bash/manual/html_node/Command-Substitution.html#Command-Substitution
With command substitution you can have a commandline like this:
echo input `COMMAND "string"`
This will first evaluate COMMAND with "string" as its argument, and then expand the result of that command's execution onto the line, replacing what's between the ‘`’ characters.
cat will be my choice: ls | cat - <(echo new line)
With perl
echo "input" | perl -ne 'print "prefix $_"'
Output:
prefix input
A solution using sd (basically a modern sed; much easier to use IMO):
# replace '$' (end of string marker) with 'Ipsum'
# the `e` flag disables multi-line matching (treats all lines as one)
$ echo "Lorem" | sd --flags e '$' 'Ipsum'
Lorem
Ipsum#no new line here
You might observe that Ipsum appears on a new line, and the output is missing a \n. The reason is that echo's output ends in a \n, and you didn't tell sd to add a new one. sd is technically correct because it's doing exactly what you are asking it to do and nothing else.
However this may not be what you want, so instead you can do this:
# replace '\n$' (new line, immediately followed by end of string) by 'Ipsum\n'
# don't forget to re-add the `\n` that you removed (if you want it)
$ echo "Lorem" | sd --flags e '\n$' 'Ipsum\n'
LoremIpsum
If you have a multi-line string, but you want to append to the end of each individual line:
$ ls
foo bar baz
$ ls | sd '\n' '/file\n'
bar/file
baz/file
foo/file
I want to prepend a "set" statement to my SQL script before running it.
So I echo the "set" instruction, then pipe it to cat. The cat command takes two parameters: stdin, marked as "-", and my SQL file; cat joins both of them into one output. Next I pass the result to the mysql command to run it as a script.
echo "set #ZERO_PRODUCTS_DISPLAY='$ZERO_PRODUCTS_DISPLAY';" | cat - sql/test_parameter.sql | mysql
P.S. The mysql login and password are stored in the .my.cnf file.
