Insert block of text within variable at specific line number - bash

I am grabbing the contents of https://api.wordpress.org/secret-key/1.1/salt/ using curl and assigning it to a bash variable. I want to insert this block of text into a file at a specific line number (10).
#!/bin/bash
salts=$(curl -s https://api.wordpress.org/secret-key/1.1/salt/)
sed -i '10i '"$salts"'' myfile.php
however I keep getting the error
sed: -e expression #1, char 102: extra characters after command
I think there may be carriage return chars in the payload returned by curl, but I'm not sure. I have tried using tr to replace them with \n but am unsure how to use it in this situation.
I have looked at multiple existing questions but I cannot get them to work in my situation. I don't have to use sed for this.

Using awk:
awk -v "s=$salts" 'NR==10{print s} 1' file
You could also combine head and tail:
echo -e "$( head -9 file )\n$salts\n$( tail -n +10 file )"
Both of them seem to work with your variable, which contains a lot of special characters.
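Putting it together with the original question, a minimal sketch might look like this (assuming the target file really is myfile.php, and using tr to strip any stray carriage returns, as suspected in the question):
#!/bin/bash
# Fetch the salts and strip any carriage returns that may be in the payload
salts=$(curl -s https://api.wordpress.org/secret-key/1.1/salt/ | tr -d '\r')
# Insert the block before line 10, writing to a temp file since plain awk cannot edit in place
awk -v "s=$salts" 'NR==10{print s} 1' myfile.php > myfile.php.tmp && mv myfile.php.tmp myfile.php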

Related

bash scripting: Can I get sed to output the original line, then a space, then the modified line?

I'm new to Unix in all its forms, so please go easy on me!
I have a bash script that will pipe an ls command with arbitrary filenames into sed, which will use an arbitrary replacement pattern on the files, and then this will be piped into awk for some processing. The catch is, awk needs to know both the original file name and the new one.
I've managed everything except getting the original file names into awk. For instance, let's say my files are test.* and my replacement pattern is 's:es:ar:', which would change every occurrence of "test" to "tart". For testing purposes I'm just using awk to print what it's receiving:
ls "$#" | sed "$pattern" | awk '{printf "0: %s\n1: %s\n2: %s\n", $0,$1,$2}'
where test.* is in $# and the pattern is stored in $pattern.
Clearly, this doesn't get me to where I want to be. The output is obviously
0: tart.c
1: tart.c
2:
If I could get sed to output "test.c tart.c", then I'd have two parameters for awk. I've played around with the pattern to no avail, even hardcoding "test.c" into the replacement. But of course that just gave me amateur results like "ttest.c art.c". Is it possible for sed to remember the input, then work it into the beginning of the output? Do I even have the right ideas? Thanks in advance!
Two ways to change the first t into a b in the duplicated field.
Duplicate the line (& replays the matched part), change the first word, then swap the two parts (keeping the two strings separated by a space):
echo test.c | sed -r 's/.*/& &/;s/t/b/;s/([^ ]*) (.*)/\2 \1/'
or with more magic (copy the original value to the hold buffer, make the change, swap it back so the original comes first, append the changed value from the buffer, and replace the newline in between with a space):
echo test.c | sed 'h;s/t/b/;x;G;s/\n/ /'
Use Perl instead of sed:
echo test.c | perl -lne 'print "$_ ", s/es/ar/r'
-l removes the newline from input and adds it after each print. The /r modifier to the substitution returns the modified string instead of changing the variable (Perl 5.14+ needed).
Old answer, not working for s/t/b/2 or s/.*/replaced/2:
You can duplicate the contents of the line with s/.*/& &/, then just tell sed that it should only apply the second substitution (this works at least in GNU sed):
echo test.c | sed 's/.*/& &/; s/es/ar/2'
$ echo 'foo' | awk '{old=$0; gsub(/o/,"e"); print old, $0}'
foo fee
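Applied to the pipeline from the question, the awk version might look something like this (a sketch; the es-to-ar substitution is done with gsub instead of sed):
ls test.* | awk '{old=$0; gsub(/es/,"ar"); print old, $0}'
# for test.c this prints: test.c tart.c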

Getting rid of some special symbol while reading from a file

I am writing a small script which is getting some configuration options from a settings file with a certain format (option=value or option=value1 value2 ...).
settings-file:
SomeOption=asdf
IFS=HDMI1 HDMI2 VGA1 DP1
SomeOtherOption=ghjk
Script:
for VALUE in $(cat settings | grep IFS | sed 's/.*=\(.*\)/\1/'); do
echo "$VALUE"x
done
Now I get the following output:
HDMI1x
HDMI2x
VGA1x
xP1
Expected output:
HDMI1x
HDMI2x
VGA1x
DP1x
I obviously can't use the data like this since the last read entry is mangled up somehow. What is going on and how do I stop this from happening?
Regards
Generally you can use awk like this:
awk -F'[= ]' '$1=="IFS"{for(i=2;i<=NF;i++)print $i"x"}' settings
-F'[= ]' splits the line on = or space. The awk program then checks whether the first field (the variable name) equals IFS and, if so, iterates through columns 2 to the end and prints them.
However, in comments you said that the file is using Windows line endings. In this case you need to pre-process the file before using awk. You can use tr to remove the carriage return symbols:
tr -d '\r' < settings | awk -F'[= ]' '$1=="IFS"{for(i=2;i<=NF;i++)print $i"x"}'
The reason is likely that your settings file uses DOS line endings.
Once you've fixed that (with dos2unix for example), your loop can also be modified to the following, removing two utility invocations:
for value in $( sed -n -e 's/^IFS.*=\(.*\)/\1/p' settings ); do
echo "$value"x
done
Or you can do it all in one go, removing the need to modify the settings file at all:
tr -d '\r' <settings |
for value in $( sed -n -e 's/^IFS.*=\(.*\)/\1/p' ); do
echo "$value"x
done
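If you would rather avoid external tools entirely, a pure-bash variant is also possible; this is just a sketch, assuming the settings layout shown in the question:
while IFS='=' read -r option value; do
    value=${value%$'\r'}          # drop a trailing DOS carriage return, if any
    if [[ $option == IFS ]]; then
        for v in $value; do       # word-split the space-separated values
            echo "${v}x"
        done
    fi
done < settings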

Unix shell scripting: need to assign the text file's values to the sed command

I was trying to add the lines from the text file to the sed command.
observered_list.txt
Uncaught SlingException
cannot render resource
IncludeTag Error
Recursive invocation
Reference component error
I need it to be coded like the following:
sed '/Uncaught SlingException\|cannot render resource\|IncludeTag Error\|Recursive invocation\|Reference component error/ d'
Help me to do this.
I would suggest you create a sed script and delete each pattern consecutively:
while read -r pattern; do
printf "/%s/ d;\n" "$pattern"
done < observered_list.txt >> remove_patterns.sed
# now invoke sed on the file you want to modify
sed -f remove_patterns.sed file_to_clean
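With the observered_list.txt from the question, the generated remove_patterns.sed should look roughly like this:
/Uncaught SlingException/ d;
/cannot render resource/ d;
/IncludeTag Error/ d;
/Recursive invocation/ d;
/Reference component error/ d;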
Alternatively you could construct the sed command like this:
pattern=
while read -r line; do
pattern=$pattern'\|'$line
done < observered_list.txt
# strip off first and last \|
pattern=${pattern#\\\|}
pattern=${pattern%\\\|}
printf "sed '/%s/ d'\n" "$pattern"
# you still need to invoke the command, it's just printed
You can use grep for that:
grep -vFf /file/with/patterns.txt /file/to/process.txt
Explanation:
-v excludes lines of process.txt which match one of the patterns from output
-F treats patterns in patterns.txt as fixed strings instead of regexes (looks like this is desired here)
-f reads patterns from patterns.txt
Check man grep for further information.
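Since grep has no in-place option, cleaning the file itself usually means writing to a temporary file and moving it back; a quick sketch, where error.log just stands in for whatever file you actually want to clean:
# keep only lines that match none of the patterns, then replace the original file
grep -vFf observered_list.txt error.log > error.log.tmp && mv error.log.tmp error.log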

Remove line from file in bash script using sed command [duplicate]

I am trying to remove lines from a text file from a Bash script using the sed command.
Here is how this function works:
User enters a record number
Program searches for the record number
Program deletes the record
Here is my code:
r=$(grep -h "$record" student_records.txt|cut -d"," -f1) #find the record that needs to be deleted
echo $line -->> This proves that previous command works
sed -i '/^$r/d' student_records.txt -->> this does not work
Any ideas?
To remove a line containing $record from the file:
grep -v "$record" student_records.txt >delete.me && mv -f delete.me student_records.txt
In the above, $record is treated as a regular expression. This means, for example, that a period is a wildcard. If that is unwanted, add the -F option to grep to specify that $record is to be treated as a fixed string.
Comments
Consider these two lines:
r=$(grep -h "$record" student_records.txt|cut -d"," -f1) #find the record that needs to be deleted
echo $line -->> This proves that previous command works
The first line defines a shell variable r. The second line prints the shell variable line, a variable which was unaffected by the previous command. Consequently, the second line is not a successful test on the first.
sed -i '/^$r/d' student_records.txt -->> this does not work
Observe that the expression $r appears inside single quotes. The shell does not alter anything inside single quotes. Consequently, $r remains a literal dollar sign followed by an r. Since a dollar sign matches the end of a line, this expression will match nothing. The following would work better:
sed -i "/^$r/d" student_records.txt
Unlike the grep command, however, the above sed command is potentially dangerous. It would be easy to construct a value of r that would cause sed to do surprising things. So, don't use this approach unless you trust the process by which you obtained r.
What if more than one line matches record?
If there is more than one line that matches record, then the following would generate an unterminated address regex error from sed:
r=$(grep -h "$record" student_records.txt|cut -d"," -f1)
sed -i "/^$r/d" student_records.txt
This error is an example of the surprising results that can happen when a shell variable is expanded into a sed command.
By contrast, this approach would remove all matching lines:
grep -v "$record" student_records.txt >delete.me && mv -f delete.me student_records.txt

sed bash substitution only if variable has a value

I'm trying to find a way, using variables and sed, to do a specific text substitution with a changing input file, but only if there is a value given to replace the existing string with. No value = do nothing (rather than removing the existing string).
Example
Substitute.csv contains 5 lines (line 3 is empty)=
this-has-text
this-has-text

this-has-text
this-has-text
and file.txt has one sentence=
"When trying this I want to be sure that text-this-has is left alone."
If I run the following command in a shell script
Text='text-this-has'
Change=`sed -n '3p' substitute.csv`
grep -rl $Text /home/username/file.txt | xargs sed -i "s|$Text|$Change|"
I end up with
"When trying this I want to be sure that is left alone."
But I'd like it to remain as
"When trying this I want to be sure that text-this-has is left alone."
Any way to tell sed "If I give you nothing new, do nothing"?
I apologize for the overthinking, bad habit. Essentially, what I'd like to accomplish is this: if line 3 of the csv file has a value, replace $Text with $Change inline. If the line is empty, leave $Text as $Text.
Text='text-this-has'
Change=$(sed -n '3p' substitute.csv)
if [[ -n $Change ]]; then
grep -rl $Text /home/username/file.txt | xargs sed -i "s|$Text|$Change|"
fi
Just keep it simple and use awk:
awk -v t="$Text" -v c="$Change" 'c!=""{sub(t,c)} {print}' file
If you need in-place editing, just use GNU awk with -i inplace.
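For in-place editing, that might look like this (a sketch, assuming GNU awk is installed as gawk):
# requires GNU awk (gawk) 4.1 or later for the inplace extension
gawk -i inplace -v t="$Text" -v c="$Change" 'c!=""{sub(t,c)} {print}' file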
Given your clarified requirement, this is probably what you actually want:
awk -v t="$Text" 'NR==FNR{if (NR==3) c=$0; next} c!=""{sub(t,c)} {print}' Substitute.csv file.txt
Testing whether $Change has a value before launching into the grep and sed is undoubtedly the most efficient bash solution, although I'm a bit skeptical about the duplication of grep and sed; it saves a temporary file in the case of files which don't contain the target string, but at the cost of an extra scan up to the match in the case of files which do contain it.
If you're looking for typing efficiency, though, the following might be interesting:
find . -name '*.txt' -exec sed -i "s|$Text|${Change:-&}|" {} \;
That will recursively find all files whose names end in .txt and run the sed command on each one. ${Change:-&} means "the value of $Change if it exists and is non-empty, and otherwise an &"; & in the replacement of a sed s command means "the matched text", so s|foo|&| replaces every occurrence of foo with itself. That's an expensive no-op, but if your time matters more than your CPU time, it might be worth it.
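A quick illustration of the ${Change:-&} trick with the sentence from the question: when Change is empty, the replacement expands to & and the line passes through untouched.
Text='text-this-has'
Change=''
echo "When trying this I want to be sure that text-this-has is left alone." | sed "s|$Text|${Change:-&}|"
# prints the sentence unchanged, because the replacement is the matched text itself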
