How to append line to empty file using sed, but not echo? - bash

I have the following problem: in a script that must not be executed as root, I have to write a line to a newly created, empty file. This file is in /etc, so I need elevated privileges to write to it.
Creating the file is simple:
sudo touch /etc/myfile
Now, just using echo to write to the file like so doesn't work...
sudo echo "something" > /etc/myfile
... because only the first part of the command (echo) is executed with sudo, but not the redirection into the file.
Previously I used something like this...
sudo sed -i -e "\$aInsert this" /etc/differntfile
...to add to the end of the file, which worked because the file wasn't empty. But since sed works line by line, it does nothing with an empty file; the file stays completely empty.
Any suggestions? Is there a way to do this with echo somehow? Any special sed expression? Any other tools I could use?

You can use tee:
echo "something" | sudo tee /etc/myfile # tee -a to append
Or redirect to /dev/null if you don't want to see the output:
echo "something" | sudo tee /etc/myfile > /dev/null
Another option is to use sh -c to perform the full command under sudo:
sudo sh -c 'echo "something" > /etc/myfile'
Regarding doing this with sed: I don't think it is possible. Since sed is a stream editor, if there is no stream, there is nothing it can do with it.
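A quick way to convince yourself, using a throwaway local file so no sudo is needed (this sketch assumes GNU sed for -i and the $a syntax):
printf '' > empty.txt                  # create an empty file
sed -i -e '$aInsert this' empty.txt    # no input lines, so the append never runs
wc -c empty.txt                        # still reports 0 bytes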

If you're willing to upgrade from sed (which I also think cannot do this) to awk (GNU awk 4.1.0 or higher to be precise), it can be done like this:
sudo gawk -i inplace '{ print } ENDFILE { print "something" }' /etc/myfile
The alternatives given by the accepted answer are probably easier; I just needed something like this for a use case where I could only use a simple shell command (so no redirection) and the file had to be passed as a single standalone argument (so no sh -c, which embeds the target file inside its single argument). Unlike sed, awk also handles an incomplete last line (one missing the trailing newline) without adding one.
This uses the inplace awk source library; see man 3am inplace for details.
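If you want to preview the result without touching the file, the same program can be run without -i inplace so it writes to stdout instead (still assuming gawk 4.1.0 or higher):
gawk '{ print } ENDFILE { print "something" }' /etc/myfile   # prints the would-be file content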

Related

Using shell script to copy script from one file to another

Basically I want to copy several lines of code from a template file to a script file.
Is it even possible to use sed to copy a string full of symbols that interact with the script?
I used these lines:
$SWAP='sudo cat /home/kaarel/template'
sed -i -e "s/#pointer/${SWAP}/" "script.sh"
The output is:
./line-adder.sh: line 11: =sudo cat /home/kaarel/template: No such file or directory
No, it is not possible to do this robustly with sed. Just use awk:
awk -v swap="$SWAP" '{sub(/#pointer/,swap)}1' script.sh > tmp && mv tmp script.sh
With recent versions of GNU awk there's a -i inplace flag for inplace editing if that's something you care about.
Good point about "&&". Here's the REALLY robust version that will work for absolutely any character in the search or replacement strings:
awk -v old="#pointer" -v new="$SWAP" 's=index($0,old){$0 = substr($0,1,s-1) new substr($0,s+length(old))} 1'
e.g.:
$ echo "abc" | awk -v old="b" -v new="m&&n" 's=index($0,old){$0 = substr($0,1,s-1) new substr($0,s+length(old))} 1'
am&&nc
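Applied to the files from the question, that could look like the following sketch (the temp-file-and-mv step stands in for in-place editing, as in the earlier awk command):
SWAP=$(sudo cat /home/kaarel/template)
awk -v old="#pointer" -v new="$SWAP" 's=index($0,old){$0 = substr($0,1,s-1) new substr($0,s+length(old))} 1' script.sh > tmp && mv tmp script.sh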
There are two issues with the line:
$SWAP='sudo cat /home/kaarel/template'
The first is that, before executing the line, bash performs variable expansion and replaces $SWAP with the current value of SWAP. That is not what you wanted. You wanted bash to assign a value to SWAP.
The second issue is that the right-hand side is enclosed in single quotes, which protect the string from expansion. You didn't want to protect the string from expansion: you wanted to execute it. To execute it, you can use back-quotes, which may look similar but act very differently.
Back-quotes, however, are an ancient form of asking for command execution. The more modern form is $(...) which eliminates some problems that back-quotes had.
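To see the difference concretely, here are the three forms side by side (using the template path from the question):
SWAP='sudo cat /home/kaarel/template'    # assigns the literal string; nothing is executed
SWAP=`sudo cat /home/kaarel/template`    # runs the command (legacy back-quote form)
SWAP=$(sudo cat /home/kaarel/template)   # runs the command (preferred modern form)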
Putting it all together, use:
SWAP=$(sudo cat /home/kaarel/template)
sed -i -e "s/#pointer/${SWAP}/" "script.sh"
Be aware, though, that the sed command may have problems if there are any sed-active characters in the template file.
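For instance, a slash or an ampersand in the template is enough to break it (hypothetical template content shown):
SWAP='path/to/thing & more'
sed -e "s/#pointer/${SWAP}/" script.sh   # the embedded '/' ends the s/// expression early, so sed
                                         # rejects it; an unescaped '&' in the replacement would
                                         # expand to the matched text "#pointer"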

Bash: Output result of function into Sed parameters

sed -i '$a\curl -s http://whatismyip.org/' file
Trying to find a way to pull the WAN IP and insert it as the last line of a file, as attempted above (not working, of course). This will be used from the command line.
sed -i '$a\test' file
This will insert "test" after the last line in "file", as shown, but how could I output the result of a function or command in its place within sed's syntax? Any suggestions (awk, perl, bash script?) are welcome!
sed isn't required here. Just use this:
curl -s http://whatsmyip.org >> your.file
Note that bash supports the >> redirection operator, which appends a program's output to a file.
hek2mgl has shown you how to solve this specific problem. To address the more general question, you can do:
var=$(some command line)
This sets the shell variable $var to the output of the command. Then you can substitute this into the sed command with:
sed -i "\$a\\$var" file
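Putting the two together with the command from the question (this assumes the curl output is a single plain line with no sed-special backslashes):
ip=$(curl -s http://whatismyip.org/)   # capture the command's output
sed -i "\$a\\$ip" file                 # append it after the last line of "file"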

Write output to file with tabs/text added in ksh script

I am writing a KornShell (ksh) script that is logging to a file. I am redirecting the output of one of my commands (scp) to the same file, but I would like to add a tab at the start of those lines in the log file if possible.
Is this possible to do?
EDIT: Also I should mention that the text I am redirecting is coming from stderr. My line currently looks like this:
scp -q ${wks}:${file_location} ${save_directory} >> ${script_log} 2>&1
Note: the below doesn't work for ksh (see this question for possible solutions).
You probably can do something like
my_command | sed 's/^/\t/' >> my.log
The idea is to process the output of the command with a stream editor like sed in some manner. In this case, a tab will be added at the beginning of every line. Consider:
$ echo -e 'Test\nfoobar' | sed 's/^/\t/'
	Test
	foobar
I haven't tested this in ksh, but a quick web search suggests that it should work.
Also note that some commands can write to both stdout and stderr; don't forget to handle both streams.
Edit: in response to the comment and the edit in the question, the adjusted command can look like
scp -q ${wks}:${file_location} ${save_directory} 2>&1 | \
sed 's/^/\t/' >> ${script_log}
or, if you want to get rid of stdout completely,
scp -q ${wks}:${file_location} ${save_directory} 2>&1 >/dev/null | \
sed 's/^/\t/' >> ${script_log}
The technique is described in this answer.
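The order of the redirections in the second form matters: 2>&1 first points stderr at the pipe, and only then does >/dev/null re-point stdout. A quick throwaway check (works in ksh and bash):
{ echo out; echo err >&2; } 2>&1 >/dev/null | sed 's/^/\t/'   # only the indented "err" line survives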

Extracting all lines from a file that are not commented out in a shell script

I'm trying to extract lines from certain files that do not begin with # (commented out). How would I run through a file, ignore everything with a # in front of it, and copy each line that does not start with a # into a different file?
Thanks
Simpler: grep -v '^[[:space:]]*#' input.txt > output.txt
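A quick check with throwaway input (file names are just illustrative):
printf '# comment\n   # indented comment\ncode line\n' > input.txt
grep -v '^[[:space:]]*#' input.txt    # prints only "code line"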
This assumes that you're using a Unix/Linux shell with the standard Unix toolkit of commands, AND that you want to keep a copy of the original file.
cp file file.orig
mv file file.fix
sed '/^[ ]*#/d' file.fix > file
rm file.fix
Or, if you've got a nice shiny new GNU sed, that can all be summarized as:
cp file file.orig
sed -i '/^[ ]*#/d' file
In both cases, the bracket expression in the sed command is meant to contain a literal space character and a literal tab character.
In other words: delete any line that begins with optional space or tab characters followed by a #, and print everything else.
I hope this helps.
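If you'd rather not paste a literal tab into the bracket expression, here are two alternatives (the first assumes bash/ksh $'...' quoting, the second uses the POSIX character class from the grep answer above):
sed $'/^[ \t]*#/d' file            # ANSI-C quoting turns \t into a real tab
sed '/^[[:space:]]*#/d' file       # matches any whitespace, including space and tab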
grep -v ^\# file > newfile
grep -v ^\# file | grep -v ^$ > newfile
Not a fancy regex, but I give this method to junior admins as it helps with understanding pipes and redirection. (The second version also filters out blank lines.)

Removing text using grep

Trying to remove a line that contains a particular pattern from a text file. I have the following code, which does not work:
grep -v "$varName" config.txt
Can anyone tell me how to make it work properly? I want to do it using grep and not sed.
You can use sed with the in-place option -i:
sed -i '/pattern/d' file
grep doesn't modify files. The best you can do if you insist on using grep and not sed is
grep -v "$varName" config.txt > $$ && mv $$ config.txt
Note that I'm using $$ as the temporary file name because it expands to the PID of your script and is therefore unlikely to be a file name used by another running script. I'd encourage using $$ in temp file names in bash scripts, especially ones that might be run multiple times simultaneously.
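If mktemp is available, a variant of the same idea that avoids name collisions even more reliably (just a sketch):
tmp=$(mktemp) && grep -v "$varName" config.txt > "$tmp" && mv "$tmp" config.txt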
Try using -Ev:
grep -Ev 'item0|item1|item2|item3'
That will filter out lines containing item0 through item3. Let me know if this helps.
