Bash script 'sed: first RE may not be empty' error

I have written the following bash script; it is not finished yet, so it is still a little messy. The script looks for directories at the same level as the script, then searches each directory for a particular file, to which it makes some changes.
When I run the script it returns the following error:
sed: first RE may not be empty
sed: first RE may not be empty
sed: first RE may not be empty
sed: first RE may not be empty
sed: first RE may not be empty
sed: first RE may not be empty
sed: first RE may not be empty
My research suggests it may have something to do with the '/' characters in the directory-name strings, but I have not been able to solve the issue.
Despite the error messages, the script seems to be working fine and is making the changes to the files correctly. Can anyone help explain why I am getting the error message above?
#!/bin/bash

FIND_DIRECTORIES=$(find . -type d -maxdepth 1 -mindepth 1)
FIND_IN_DIRECTORIES=$(find $FIND_DIRECTORIES"/app/design/adminhtml" -name "login.phtml")

for i in $FIND_IN_DIRECTORIES
do
    # Generate Random Number
    RANDOM=$[ ( $RANDOM % 1000 ) + 1 ]
    # Find the line where password is printed out on the page
    # Grep for the whole line, then remove all but the numbers
    # This will leave the old password number
    OLD_NUM_HOLDER=$(cat $i | grep "<?php echo Mage::helper('adminhtml')->__('Password: ')" )
    OLD_NUM="${OLD_NUM_HOLDER//[!0-9]}"
    # Add old and new number to the end of text string
    # Beginning text string is used so that sed can find
    # Replace old number with new number
    OLD_NUM_FULL="password\" ?><?php echo \""$OLD_NUM
    NEW_NUM_FULL="password\" ?><?php echo \""$RANDOM
    sed -ie "s/$OLD_NUM_FULL/$NEW_NUM_FULL/g" $i
    # GREP for the setNewPassword function line
    # GREP for new password that has just been set above
    SET_NEW_GREP=$(cat $i | grep "setNewPassword(" )
    NEW_NUM_GREP=$(cat $i | grep "<?php echo \"(password\" ?><?php echo" )
    NEW_NUM_GREPP="${NEW_NUM_GREP//[!0-9]}"
    # Add new password to string for sed
    # Find and replace old password for setNewPassword function
    FULL_NEW_PASS="\$user->setNewPassword(password"$NEW_NUM_GREPP")"
    sed -ie "s/$SET_NEW_GREP/$FULL_NEW_PASS/g" $i
done
Thanks in advance for any help with this.
UPDATE -- ANSWER
The issue here was that the find call was not doing what I expected. I thought it was searching /first/directory"/app/design/adminhtml", then /second/directory"/app/design/adminhtml", and so on. It was actually searching /first/directory as-is and only attaching the "/app/design/adminhtml" suffix to the last item in the list. That made the script visit files the grep calls could not match, which left sed's search pattern empty and produced the errors above. The fixed script is below, after a minimal reproduction of the error.
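On BSD/macOS sed (whose wording matches the message above), an empty search half in an s/// command reproduces the error exactly; GNU sed phrases the complaint differently:

$ pattern=""
$ echo "some text" | sed "s/$pattern/replacement/g"
sed: first RE may not be empty

And the fixed script: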
#!/bin/bash

for i in $(find . -type d -maxdepth 1 -mindepth 1); do
    FIND_IN_DIRECTORIES=$i"/app/design/adminhtml/default"
    FIND_IN_DIRECTORIES=$(find $FIND_IN_DIRECTORIES -name "login.phtml")
    # Generate Random Number
    RANDOM=$[ ( $RANDOM % 1000 ) + 1 ]
    # Find the line where password is printed out on the page
    # Grep for the whole line, then remove all but the numbers
    # This will leave the old password number
    OLD_NUM_HOLDER=$(cat $FIND_IN_DIRECTORIES | grep "<?php echo Mage::helper('adminhtml')->__('Password: ')" )
    OLD_NUM="${OLD_NUM_HOLDER//[!0-9]}"
    # Add old and new number to the end of text string
    # Beginning text string is used so that sed can find
    # Replace old number with new number
    OLD_NUM_FULL="password\" ?><?php echo \""$OLD_NUM
    NEW_NUM_FULL="password\" ?><?php echo \""$RANDOM
    sed -ie "s/$OLD_NUM_FULL/$NEW_NUM_FULL/g" $FIND_IN_DIRECTORIES
    # GREP for the setNewPassword function line
    # GREP for new password that has just been set above
    SET_NEW_GREP=$(cat $FIND_IN_DIRECTORIES | grep "setNewPassword(" )
    NEW_NUM_GREP=$(cat $FIND_IN_DIRECTORIES | grep "<?php echo \"(password\" ?><?php echo" )
    NEW_NUM_GREPP="${NEW_NUM_GREP//[!0-9]}"
    # Add new password to string for sed
    # Find and replace old password for setNewPassword function
    FULL_NEW_PASS="\$user->setNewPassword(password"$NEW_NUM_GREPP")"
    sed -ie "s/$SET_NEW_GREP/$FULL_NEW_PASS/g" $FIND_IN_DIRECTORIES
done

Without debugging your whole setup, note that you can use an alternate character to delimit sed regex/replacement values, e.g.
sed -i "s\#$OLD_NUM_FULL#$NEW_NUM_FULL#g" $i
and
sed -i "s\#$SET_NEW_GREP#$FULL_NEW_PASS#g" $i
You don't need the -e, so I have removed it.
Some seds require a leading '\' before the #, so I include it. It is possible that others will be confused by it, so if this doesn't work, try removing the leading '\'.
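For example, with hypothetical values that contain slashes:

old='app/design/adminhtml'
new='app/design/frontend'
echo 'path: app/design/adminhtml' | sed "s#$old#$new#g"
# prints: path: app/design/frontend

Here GNU sed accepts the # delimiter without the leading backslash.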
You should also turn on shell debugging to see exactly which sed invocation (and which values) is causing the problem. Add a line with set -vx near the top of your script to turn it on.
I hope this helps.

Related

Convert multi-line csv to single line using Linux tools

I have a .csv file that contains double-quoted multi-line fields. I need to convert each multi-line cell to a single line. It doesn't show in the sample data, but I do not know which fields might be multi-line, so any solution will need to check every field. I do know how many columns I'll have. The first line will also need to be skipped. I don't know how much data there is, so performance isn't a consideration.
I need something that I can run from a bash script on Linux. Preferably using tools such as awk or sed and not actual programming languages.
The data will be processed further with Logstash but it doesn't handle double quoted multi-line fields hence the need to do some pre-processing.
I tried something like this and it kind of works on one row but fails on multiple rows.
sed -e :0 -e '/,.*,.*,.*,.*,/b' -e N -e '1n;N;N;N;s/\n/ /g' -e b0 file.csv
CSV example
First name,Last name,Address,ZIP
John,Doe,"Country
City
Street",12345
The output I want is
First name,Last name,Address,ZIP
John,Doe,Country City Street,12345
Jane,Doe,Country City Street,67890
etc.
etc.
First my apologies for getting here 7 months late...
I came across a problem similar to yours today, with multiple multi-line fields. I was glad to find your question, but at least in my case there is the extra complexity that, with more than one field affected, quotes might open, close, and open again on the same line... Anyway, after reading a lot and combining answers from different posts, I came up with something like this:
First I count the quotes in a line. To do that, I strip everything but the quotes and then use wc:
quotes=`echo $line | tr -cd '"' | wc -c` # Counts the quotes
If you think of a single multi-line field, knowing if the quotes are 1 or 2 is enough. In a more generic scenario like mine I have to know if the number of quotes is odd or even to know if the line completes the record or expects more information.
To check for even or odd you can use the mod operand (%), in general:
even % 2 = 0
odd % 2 = 1
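In bash that check is a single arithmetic expansion, e.g. with a hypothetical count:

quotes=3
echo $(( quotes % 2 ))   # prints 1, i.e. odd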
For the first line:
Odd means that the line expects more information on the next line.
Even means the line is complete.
For the subsequent lines, I have to know the status of the previous one. For instance, in your sample text:
First name,Last name,Address,ZIP
John,Doe,"Country
City
Street",12345
You can say line 1 (John,Doe,"Country) has 1 quote (odd), which means the status of the record is incomplete, or open.
When you go to line 2, there is no quote (even). Nevertheless, this does not mean the record is complete; you have to consider the previous status... so for the lines following the first one it will be:
Odd means that record status toggles (incomplete to complete).
Even means that record status remains as the previous line.
What I did was loop line by line while carrying the status of each line over to the next one:
incomplete=0
cat file.csv | while read line; do
    quotes=`echo $line | tr -cd '"' | wc -c`  # Counts the quotes
    incomplete=$((($quotes+$incomplete)%2))   # Check if odd or even to decide status
    if [ $incomplete -eq 1 ]; then
        echo -n "$line " >> new.csv           # If the line is incomplete, join it with the next
    else
        echo "$line" >> new.csv               # If the line completes the record, finish it
    fi
done
Once executed against a file in your format, this generates a new.csv like this:
First name,Last name,Address,ZIP
John,Doe,"Country City Street",12345
I like one-liners as much as everyone; I wrote the script above for the sake of clarity. You can - arguably - write it in one line like:
i=0;cat file.csv|while read l;do i=$((($(echo $l|tr -cd '"'|wc -c)+$i)%2));[[ $i = 1 ]] && echo -n "$l " || echo "$l";done >new.csv
I would appreciate it if you could go back to your example and see if this works for your case (which you most likely already solved). Hopefully this can still help someone else down the road...
Recovering the multi-line fields
Every need is different; in my case I wanted the records on one line so I could further process the csv and add some bash-extracted data, but I also wanted to keep the csv as it was. To accomplish that, instead of joining the lines with a space, I used a code - likely unique - that I could later search and replace:
i=0;cat file.csv|while read l;do i=$((($(echo $l|tr -cd '"'|wc -c)+$i)%2));[[ $i = 1 ]] && echo -n "$l ~newline~ " || echo "$l";done >new.csv
The code is ~newline~; this is totally arbitrary, of course.
Then, after doing my processing, I took the csv text file and replaced the coded newlines with real newlines:
sed -i 's/ ~newline~ /\n/g' new.csv
References:
Ternary operator: https://stackoverflow.com/a/3953666/6316852
Count char occurrences: https://stackoverflow.com/a/41119233/6316852
Other peculiar cases: https://www.linuxquestions.org/questions/programming-9/complex-bash-string-substitution-of-csv-file-with-multiline-data-937179/
TL;DR
Run this:
i=0;cat file.csv|while read l;do i=$((($(echo $l|tr -cd '"'|wc -c)+$i)%2));[[ $i = 1 ]] && echo -n "$l " || echo "$l";done >new.csv
... and collect results in new.csv
I hope it helps!
If Perl is your option, please try the following:
perl -e '
while (<>) {
    $str .= $_;
}
while ($str =~ /("(("")|[^"])*")|((^|(?<=,))[^,]*((?=,)|$))/g) {
    if (($el = $&) =~ /^".*"$/s) {
        $el =~ s/^"//s; $el =~ s/"$//s;
        $el =~ s/""/"/g;
        $el =~ s/\s+(?!$)/ /g;
    }
    push(@ary, $el);
}
foreach (@ary) {
    print /\n$/ ? "$_" : "$_,";
}' sample.csv
sample.csv:
First name,Last name,Address,ZIP
John,Doe,"Country
City
Street",12345
John,Doe,"Country
City
Street",67890
Result:
First name,Last name,Address,ZIP
John,Doe,Country City Street,12345
John,Doe,Country City Street,67890
This might work for you (GNU sed):
sed ':a;s/[^,]\+/&/4;tb;N;ba;:b;s/\n\+/ /g;s/"//g' file
Test each line to see that it contains the correct number of fields (in the example that was 4). If there are not enough fields, append the next line and repeat the test. Otherwise, replace the newline(s) with spaces and finally remove the double quotes.
N.B. This may be fraught with problems, such as commas between double quotes and quoted double quotes.
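For readability, the same GNU sed program can be written one command per line; this is just a reformatting sketch, the behavior is unchanged:

sed ':a
s/[^,]\+/&/4
tb
N
ba
:b
s/\n\+/ /g
s/"//g' file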
Try cat -v file.csv. If the file was made with Excel, you might have some luck: when the newlines in a field are a simple \n and the newline at the end of each record is a \r\n (which cat -v shows as ^M), parsing is simple.
# delete all newlines and replace the ^M with a new newline.
tr -d "\n" < file.csv| tr "\r" "\n"
# Above two steps with one command
tr "\n\r" " \n" < file.csv
When you want a space between the joined lines, you need an additional step to remove the leading space that ends up at the start of each record:
tr "\n\r" " \n" < file.csv | sed '2,$ s/^ //'
EDIT: @sjaak commented that this didn't work in his case.
When your broken lines also have ^M you can still be a lucky (wo-)man.
When your broken field is always the first field in double quotes and you have GNU sed 4.2.2, you can join 2 lines when the first line has exactly one double quote.
sed -rz ':a;s/(\n|^)([^"]*)"([^"]*)\n/\1\2"\3 /;ta' file.csv
Explanation:
-z don't use \n as line endings
:a label for repeating the step after successful replacement
(\n|^) Match right after a newline or at the very start of the file
([^"]*) Substring without a "
ta Go back to label a and repeat
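Applied to the question's sample file, this should produce the joined record with the quotes kept:

$ sed -rz ':a;s/(\n|^)([^"]*)"([^"]*)\n/\1\2"\3 /;ta' file.csv
First name,Last name,Address,ZIP
John,Doe,"Country City Street",12345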
awk pattern matching works for this.
Answer in one line:
awk '/,"/{ORS=" "};/",/{ORS="\n"}{print $0}' YourFile
if you'd like to drop quotes, you could use:
awk '/,"/{ORS=" "};/",/{ORS="\n"}{print $0}' YourFile | sed 's/"//gw NewFile'
but I prefer to keep them.
To explain the code:
/Pattern/ : run the action when Pattern matches the current line.
ORS : the output record separator, printed after each output record.
$0 : the whole of the current line.
's/OldPattern/NewPattern/' : substitute the first OldPattern with NewPattern.
/g : does the previous action for every OldPattern on the line.
/w : write the result to NewFile.
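Using the question's sample as YourFile, the one-liner should produce:

$ awk '/,"/{ORS=" "};/",/{ORS="\n"}{print $0}' YourFile
First name,Last name,Address,ZIP
John,Doe,"Country City Street",12345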

How to process tr across all files in a directory and output to a different name in another directory?

mpu3$ echo * | xargs -n 1 -I {} | tr "|" "/n"
which outputs:
#.txt
ag.txt
bg.txt
bh.txt
bi.txt
bid.txt
dh.txt
dw.txt
er.txt
ha.txt
jo.txt
kc.txt
lfr.txt
lg.txt
ng.txt
pb.txt
r-c.txt
rj.txt
rw.txt
se.txt
sh.txt
vr.txt
wa.txt
is what I have so far. What is missing is the output; I get none. What I really want is to get a list of txt files, take each name up to the extension, replace every "|" with a newline, and put the new file in another directory as [old-name].ics. HALP. THX in advance. - Idiot me.
You can loop over the files and use sed to process the file:
for i in *.txt; do
    sed -e 's/|/\n/g' "$i" > other_directory/"${i%.txt}".ics
done
No need to use xargs, especially with echo, which risks the filenames getting word-split and having globbing applied to them, so it could well do the wrong thing.
Then we use sed's s command to substitute | with \n; the g flag makes it a global replace. We redirect that to the other directory you want and use bash's parameter expansion to strip the .txt off the end.
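As a quick illustration of that parameter expansion, with a hypothetical filename:

i='notes.txt'
echo "${i%.txt}.ics"   # prints: notes.ics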
Here's an awk solution:
$ awk '
FNR==1 {                       # for first record of every file
    close(f)                   # close previous file f
    f="path_to_dir/" FILENAME  # new filename with path
    sub(/txt$/,"ics",f)        # replace txt with ics
}
{
    gsub(/\|/,"\n")            # replace | with \n
    print > f                  # print to new file
}' *.txt

How do I recursively replace part of a string with another given string in bash?

I need to write a bash script that converts a string of only integers, "intString", to :id. intString always appears after a /, never contains anything other than digits (create_step2 is not a valid intString), and ends at either a second / or the end of the line. intString may be 1-8 characters long. The script needs to be applied to every line in a given file.
For example:
/sample/123456/url should be converted to /sample/:id/url
and /sample_url/9 should be converted to /sample_url/:id; however, /sample_url_2/ should remain the same.
Any help would be appreciated!
Going recursive seems like the long way around the problem, though I don't know the full problem you are solving. A good sed command like
sed -E 's/\/[0-9]{1,}/\/:id/g'
could do it in one shot, but if you insist on being recursive, then it might go something like this:
#!/bin/bash

function restring()
{
    s="$1"
    s="$(echo $s | sed -E 's/\/[0-9]{1,}/\/:id/')"
    if ( echo $s | grep -E '\/[0-9]{1,}' > /dev/null ) ; then
        restring $s
    else
        echo $s
        exit
    fi
    echo $s
}

restring "$1"
Now run it:
$ ./restring.sh "/foo/123/bar/456/baz/45435/andstuff"
/foo/:id/bar/:id/baz/:id/andstuff
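For comparison, the single sed command from the top of this answer handles the same input in one pass:

$ echo "/foo/123/bar/456/baz/45435/andstuff" | sed -E 's/\/[0-9]{1,}/\/:id/g'
/foo/:id/bar/:id/baz/:id/andstuff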

UNIX - Replacing variables in sql with matching values from .profile file

I am trying to write a shell script which will take an SQL file as input. Example SQL file:
SELECT *
FROM %%DB.TBL_%%TBLEXT
WHERE CITY = '%%CITY'
Now the script should extract all variables, which in this case means everything starting with %%. The output file will be something like below:
%%DB
%%TBLEXT
%%CITY
Now I should be able to extract the matching values from the user's .profile file for these variables and create the SQL file with the proper values.
SELECT *
FROM tempdb.TBL_abc
WHERE CITY = 'Chicago'
As of now I am trying to generate file1, which will contain all the variables. Below is a code sample -
sed "s/[(),']//g" "T:/work/shell/sqlfile1.sql" | awk '/%%/{print $NF}' | awk '/%%/{print $NF}' > sqltemp2.sql
It only takes me as far as:
%%DB.TBL_%%TBLEXT
%%CITY
Can someone help me get file1 to list the variables?
You can use grep and sort to get a list of unique variables, as per the following transcript:
$ echo "SELECT *
FROM %%DB.TBL_%%TBLEXT
WHERE CITY = '%%CITY'" | grep -o '%%[A-Za-z0-9_]*' | sort -u
%%CITY
%%DB
%%TBLEXT
The -o flag to grep instructs it to only print the matching parts of lines rather than the entire line, and also outputs each matching part on a distinct line. Then sort -u just makes sure there are no duplicates.
In terms of the full process, here's a slight modification to a bash script I've used for similar purposes:
# Define all translations.
declare -A xlat
xlat['%%DB']='tempdb'
xlat['%%TBLEXT']='abc'
xlat['%%CITY']='Chicago'

# Check all variables in input file.
okay=1
for key in $(grep -o '%%[A-Za-z0-9_]*' input.sql | sort -u) ; do
    if [[ "${xlat[$key]}" == "" ]] ; then
        echo "Bad key ($key) in file:"
        grep -n "${key}" input.sql | sed 's/^/  /'
        okay=0
    fi
done
if [[ ${okay} -eq 0 ]] ; then
    exit 1
fi

# Process input file doing substitutions. Fairly
# primitive use of sed, must change to use sed -i
# at some point.
# Note we sort keys based on descending length so we
# correctly handle extensions like "NAME" and "NAMESPACE";
# doing the longer ones first makes it work properly.
cp input.sql output.sql
for key in $( (
        for key in ${!xlat[@]} ; do
            echo ${key}
        done
    ) | awk '{print length($0)":"$0}' | sort -rnu | cut -d':' -f2) ; do
    sed "s/${key}/${xlat[$key]}/g" output.sql >output2.sql
    mv output2.sql output.sql
done
cat output.sql
It first checks that the input file doesn't contain any keys not found in the translation array. Then it applies sed substitutions to the input file, one per translation, to ensure all keys are substituted with their respective values.
This should be a good start, though there may be some edge cases such as if your keys or values contain characters sed would consider important (like / for example). If that is the case, you'll probably need to escape them such as changing:
xlat['%%UNDEFINED']='0/0'
into:
xlat['%%UNDEFINED']='0\/0'
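A minimal sketch of doing that escaping with bash parameter expansion; this covers only the / case shown above, and & or \ in a value would need the same treatment:

val='0/0'
xlat['%%UNDEFINED']=${val//\//\\/}   # stores 0\/0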

Why is this bash for loop slow?

I am trying to run this code:
for f in jobs/UPDTEST/apples* ; do
    nf=`echo $f | sed s:jobs\/::g`
    echo $nf | tr '_' ' '
done > jobs
There are 750 apples* text files. But as I am only messing with the file names - I would have thought it would be quick - yet it takes about 5 minutes.
Is there an alternative way to do this?
The loop is slow because each iteration forks two external processes (one sed and one tr); over 750 files that is roughly 1500 process launches. You can use parameter expansions like ${parameter/pattern/string}, which happen entirely inside the shell, to get rid of the calls to sed and tr. In your case it could look like:
for f in jobs/UPDTEST/apples*; do
    f=${f//jobs\//}
    echo ${f//_/ }
done > jobs
First, cd jobs would remove the need for the sed
Second, you don't need tr to substitute characters in the value of a bash variable.
Third, with find you don't need a loop at all.
f=$(cd jobs; find UPDTEST -name 'apples*' -depth 1)
echo "${f//_/ }" > jobs.log
By the way, you can't have a jobs directory and a jobs file in the same directory.
