I'm trying to do some manipulation with WordPress and am writing a script for it...
# cat /usr/local/uftwf/_wr.sh
#!/bin/sh
# $Id$
#
table_prefix=`grep ^\$table_prefix wp-config.php | awk -F\' '{print $2}'`
echo $table_prefix
#
Yet I'm getting the following output:
# /usr/local/uftwf/_wr.sh
ABSPATH ABSPATH wp-settings.php_KEY LOGGED_IN_KEY NONCE_KEY AUTH_SALT SECURE_AUTH_SALT LOGGED_IN_SALT NONCE_SALT wp_0zw2h5_ de_DE WPLANG WP_DEBUG s all, stop editing! Happy blogging. */
#
Running it from the command line, I get the correct output that I'm looking for:
# grep ^\$table_prefix wp-config.php | awk -F\' '{print $2}'
wp_0zw2h5_
#
What is going wrong in the script?
The problem is the grep command:
table_prefix=`grep ^\$table_prefix wp-config.php | awk -F\' '{print $2}'`
It either needs three backslashes - not one - or you need to use single quotes (which is much simpler):
table_prefix=$(grep '^$table_prefix' wp-config.php | awk -F"'" '{print $2}')
It's also worth using the $( ... ) notation in general.
The trouble is that the backquotes remove the backslash, so the shell variable is expanded before grep ever runs; what's passed to grep is, most likely, just ^, and every line matches a beginning-of-line anchor.
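A quick way to see it, assuming table_prefix is not set in the shell you run this in:
echo `echo ^\$table_prefix`      # backquotes: \$ loses its backslash, the variable expands, output is just ^
echo $(echo '^$table_prefix')    # $( ) with single quotes: output is ^$table_prefix, intact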
This has all the appearance of grep not omitting the non-matching lines. When you issue echo $table_prefix without quotes, the shell collapses all the whitespace into a single output line; if you issue echo "$table_prefix" instead, you would see the match together with all the other whitespace that was output.
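The whitespace collapsing is easy to demonstrate on its own (a throwaway example, unrelated to wp-config.php):
v=$(printf 'one\n\ntwo   three\n')
echo $v       # -> one two three          (newlines and repeated blanks collapsed)
echo "$v"     # -> the original lines and spacing preserved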
I'd recommend the following sed expression instead:
table_prefix=$(sed -n "s/^\$table_prefix.*'\([^']*\)'.*/\1/p" wp-config.php)
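For example, fed a line shaped like the one in the question (the exact spacing in wp-config.php is an assumption):
printf "%s\n" "\$table_prefix  = 'wp_0zw2h5_';" |
sed -n "s/^\$table_prefix.*'\([^']*\)'.*/\1/p"
# -> wp_0zw2h5_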
You should try
#!/bin/sh
table_prefix=$(awk -F"'" '/^\$table_prefix/{print $2}' wp-config.php)
echo $table_prefix
Does this one work for you?
awk -F\' '/^\$table_prefix/ {print $2}' wp-config.php
Update
If you are using shell scripting anyway, there is no need to call up awk or grep (note that the ${var/.../} substitutions below are bash features, not plain sh):
#!/bin/bash
while read varName op varValue theRest
do
if [ "_$varName" = "_\$table_prefix" ]
then
table_prefix=${varValue//\'/} # Remove the single quotes
table_prefix=${table_prefix/;/} # Remove the semicolon
break
fi
done < wp-config.php
echo "Found: $table_prefix"
Related
I run the command
df -gP /data1 /data2 | grep -v File | awk '{print $1}' |
awk -F/dev/ '$0=$2' | tr '\n' '
on the AIX shell (ksh) and it prints the output below:
lv_data01 lv_data02 root#testhost:/
However, I would like the output to be printed this way. Could someone help?
lv_data01 lv_data02
Using grep … | awk … | awk … is not necessary; a single awk could do the whole job. So could sed and it might even be easier. I'd be tempted to deal with the spacing by using:
x=$(df … | sed …); echo $x
The tr command, once corrected, replaces newlines with spaces, so the prompt follows without a newline before it. The ; echo suggestion adds the missing newline; the echo $x suggestion (note no double quotes) does too.
As for the sed command:
sed -n '/File/!{ s/[[:space:]].*//; s%^.*/dev/%%p; }'
Don't print anything by default
If the line doesn't match File (doing the work of grep -v):
remove the first space (blank or tab) and everything after it (doing the work of awk '{print $1}')
replace everything up to /dev/ with nothing and print (doing the work of awk -F/dev/ '{$0=$2}')
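To see those steps in isolation, here is a made-up fragment of df -gP output (the headings and sizes are invented; only the device names come from the question):
printf '%s\n' \
  'Filesystem GB blocks Free %Used Mounted on' \
  '/dev/lv_data01 10.00 3.00 70% /data1' \
  '/dev/lv_data02 20.00 8.00 60% /data2' |
sed -n '/File/!{ s/[[:space:]].*//; s%^.*/dev/%%p; }'
# -> lv_data01
#    lv_data02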
The command substitution and capture, followed by echo, deals with spaces and newlines.
So, my suggested solution is:
x=$(df -gP /data1 /data2 | sed -n '/File/!{ s/[[:space:]].*//; s%^.*/dev/%%p; }'); echo $x
You could add unset x after the echo if you are going to be using this directly in the shell and not in a shell script. If it'll be encapsulated in a shell script, you don't have to worry about it.
I'm blithely assuming the output from df -gP won't contain a path such as this, with two occurrences of /dev:
/who/knows/dev/lv_data01/dev/bin
If that's a real problem, you can fix the sed script, but I don't think it will be. It's one thing the second awk script in the question handles differently.
I have this string in a variable:
strVar="Hello World [randomSubstring].zip"
I would like to extract [randomSubstring], where that substring inside the brackets could be anything.
The expected result must be something like this:
echo "$strVar"
Hello World .zip
I tried several combinations with grep and awk but without success. I am using CentOS 7.
echo "Hello World [RNVE5Z].zip" | grep -oP '(?<=[).*(?=])'
echo "Hello World [RNVE5Z].zip" | awk -F"["" '{print $1}' | awk -F"]" '{print $2}'
Bash only:
echo ${strVar/\[*\]/}
Just use bash's substring manipulation:
echo ${strVar/\[*\]/}
I would prefer it over an external call to sed, unless there is more to be done, in which case I would use sed anyway:
echo $strVar | sed 's/\[.*\]//'
I don't think there is an elegant solution with grep, but I might be wrong; I'm not that fluent in awk.
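A quick check with the sample value from the question:
strVar="Hello World [RNVE5Z].zip"
echo ${strVar/\[*\]/}              # -> Hello World .zip
echo "$strVar" | sed 's/\[.*\]//'  # -> Hello World .zip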
There are numerous posts about removing leading white space and appending an entry to a single existing line in a file using awk. None of my attempts work - just three examples here of the many I have tried.
Say I have a file called $log with a single line
a:b:c
and I want to add a fourth entry.
awk '{ print $4"d" }' $log | tee -a $log
output seems to be a newline
a:b:c:
d
whereas, I want all on the same line;
a:b:c:d
try
BEGIN { FS = ":" } ; awk '{ print $4"d" }' $log | tee -a $log
or this, to avoid a new line:
awk 'BEGIN { ORS=":" }; { print $4"d" }' $log | tee -a $log
no change
a:b:c:
d
awk is placing a space after c: and then writing d to the next line.
EDIT: | tee -a $log appears to be necessary to write the additional string to the file.
$log contains 39 variables and was generated using awk without | tee -a
odd...
The actual command to write $40 to the single-line entries is:
awk '{ print $40"'$imagedir'" }' $log
output
+ awk '{ print $40"/home/geoland/Asterism-DEVEL/DSO" }'
/home/geoland/.asterism/log
but this does not write to the $log file.
How should I append d to the same line without leading white space using awk? I'm also looking at sed, xargs and other alternatives.
Using awk:
awk '{ print $0":d" }' file
Using sed:
sed 's/$/:d/' file
Using only bash:
while IFS= read -r line; do
echo "$line:d"
done < file
Using sed:
$ echo a:b:c | sed 's,\(^.*$\),\1:d,'
a:b:c:d
Thanks all... This is the solution I went with. I also needed to write the entire line to a perpetual log file because the log file is overwritten at each new process instance.
I will further investigate an awk solution.
logname=$imagedir/log_$name
while IFS=: read -r line; do
echo "$line$imagedir"
done < $log | tee $logname
This places $imagedir directly behind the last IFS ':' separator.
There is probably room for refinement.
I too am not entirely sure what you're trying to do here.
Your command line, awk '{ print $4"d" }' $log | tee -a $log is problematic in a number of ways.
First, your awk script tries to print the 4th field, which is empty. Unless you say otherwise, fields are separated by whitespace, and the string a:b:c has no whitespace, so the whole line is $1 and $4 is empty. So awk prints just "d". And tee -a appends to your existing logfile, so what you're seeing is the original data, along with the d printed by awk. That's totally expected.
Second, you appear to have tee appending to the same file that awk is in the process of reading. This won't make an endless loop, as awk should stop reading the input file after whatever was the last byte when the file was opened, but it does mean you may end up with repeated data in there.
Your other attempts, aside from some syntactical errors, all suffer from the same assumption that $4 means something that it does not.
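To see that concretely, this reproduces the behaviour described above:
$ echo "a:b:c" | awk '{ print $4 "d" }'
d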
The following awk snippet sets the input and output field separators to :, then sets the 4th field to "d", then prints the line.
$ echo "a:b:c" | awk 'BEGIN{FS=OFS=":"} {$4="d"} 1'
a:b:c:d
Is that what you want?
If you really do need to append this data to an existing log file, you can do so with tee -a or simple >> redirection. Just bear in mind that awk will only see the content of the file as of the time it was run, and by appending, you are not replacing lines.
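If what you actually want is the modified line in the file, rather than an extra copy appended after the original, a safer pattern is to write to a temporary file and rename it (a sketch; log here stands in for your $log):
awk 'BEGIN{FS=OFS=":"} {$4="d"} 1' log > log.tmp && mv log.tmp log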
One other thing. If you are actually hoping to use the content of the shell variable $imagedir inside awk, you should pass the variable in rather than exiting your quotes. For example:
$ echo "a:b:c" | awk -v d="foo/bar" 'BEGIN{FS=OFS=":"} {$4=d} 1'
a:b:c:foo/bar
sed "s|$|$imagedir|" file | tee newfile
This does the trick: it reads 'file' and writes its contents, with the substitution applied, to 'newfile', so the image directory can be read later by a secondary standalone process.
Because the variable is a directory path containing several / characters, these would need to be escaped so they are not interpreted as sed delimiters, and I had difficulty doing that with a variable.
A neater option was to use an alternative delimiter (|), not to be confused with the shell pipe that follows.
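For example, with the directory from the question:
imagedir=/home/geoland/Asterism-DEVEL/DSO
echo "a:b:c" | sed "s|$|$imagedir|"
# -> a:b:c/home/geoland/Asterism-DEVEL/DSO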
I have a CSV list that is two columns (col1 is Share Name, col2 is file system path). I need two variables for either everything BEFORE the comma or everything AFTER the comma. My issue is that either column potentially has spaces, and even though these are quoted in the output, my script isn't handling them properly.
CSV:
ShareName,/path/to/sharename
"Share with spaces",/path/to/sharewithspaces
ShareWithSpace,"/path/to/share with spaces"
I was using this awk statement to get either field 1 or field 2:
echo $line | awk -F "\"*,\"*" '{print $2}'
BUT, I soon realized that it wasn't handling the spaces properly, even when capturing that command's output in a variable and quoting the variable.
So, then after googling my brain out, I was trying this:
echo $line | cut -d, -f2
Which works, EXCEPT when echoing the variable $line. If I echo the string directly, it works perfectly, but unfortunately I'm using this in a while/read/do loop.
I am fairly certain my issue is having to define fields and having whitespace, but I really only need before or after a comma.
Here's the stripped down version so there's no sensitive data.
#!/usr/bin/bash
ssh <ip> <command> > "2_shares.txt"
<command> > "1_shares.txt"
file1="1_shares.txt"
file2="2_shares.txt"
while read -r line
do
share=`echo "$line" | awk -F "\"*,\"*" '{print $1}'`
path=`echo "$line" | awk -F "\"*,\"*" '{print $2}'`
if grep "$path" $file2 > /dev/null;
then
:
else
echo "SHARE NEEDS CREATED FOR $line"
case $path in
*)
blah blah blah
;;
esac
fi
done < "$file1"
You could simply do it like this:
awk -F',' '{print $2}' file
To skip the first (header) line:
awk -F',' 'NR>1{print $2}' file
Your issue is simply that you aren't quoting your shell variables. ALWAYS quote shell variables unless you have a very specific reason not to and are fully aware of all of the consequences.
I strongly suspect the rest of your script is completely wrong in its approach, since you apparently didn't know to quote variables and are talking about shell loops and echoing one line at a time to awk, so please do post a follow-up question if you'd like help.
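For what it's worth, if you do keep the read loop, a sketch with read doing the splitting and every expansion quoted might look like this (the quote-stripping is an assumption about how your CSV quotes its fields; adapt the case statement as needed):
while IFS=, read -r share path
do
    # strip surrounding double quotes, if present
    share=${share%\"}; share=${share#\"}
    path=${path%\"};   path=${path#\"}
    if ! grep -F -- "$path" "$file2" > /dev/null
    then
        echo "SHARE NEEDS CREATED FOR $share,$path"
    fi
done < "$file1"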
I would like to replace a variable inside the awk command with a bash variable.
For example:
var="one two three"
echo $var | awk '{print $2}'
I want to replace the $2 with the var variable. I have tried awk -v as well as something like awk "{ print ${$wordnum} }" to no avail.
Slightly different approach:
$ echo $var
one two three
$ field=3
$ echo $var | awk -v f="$field" '{print $f}'
three
$ field=2
$ echo $var | awk -v f="$field" '{print $f}'
two
You've almost got it...
$ myfield='$3'
$ echo $var | awk "{print $myfield}"
three
The hard quotes on the first line prevent interpretation of $3 by the shell. The soft quotes on the second line allow variable replacement.
You can concatenate parts of awk statements with variables. Maybe this is what you want in your script file:
echo $1|awk '{print($'$2');}'
Here the parts '{print($' and ');}' are concatenated with the value of the positional parameter $2 and given to awk.
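For instance, with that line saved as an executable script (the name fields.sh is made up):
$ ./fields.sh 'one two three' 2
two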
EDIT: After some advice, rather don't use this, except perhaps as a one-time solution. It's better to get accustomed to doing it right from the start; see the link in the first comment.