Add a new line of text at the top of a file in bash shell [duplicate] - bash

This question already has answers here:
Unix command to prepend text to a file
(21 answers)
Closed 4 years ago.
I want to write a bash script that takes my file:
READ_ME.MD
two
three
four
and makes it
READ_ME.MD
one
two
three
four
There are a bunch of similar StackOverflow questions, but I tried their answers and haven't been successful.
These are the bash scripts that I have tried and failed with:
test.sh
sed '1s/^/one/' READ_ME.md > READ_ME.md
Result: Clears the contents of my file
test.sh
sed '1,1s/^/insert this /' READ_ME.md > READ_ME.md
Result: Clears the contents of my file
test.sh
sed -i '1s/^/one\n/' READ_ME.md
Result: sed: 1: "READ_ME.md": invalid command code R
Any help would be appreciated.

You can use this BSD sed command:
sed -i '' '1i\
one
' file
-i '' saves the changes in place to the file; the empty suffix means no backup copy is kept.
If you want to add the line at the top only when that same line is not already there, use this BSD sed command:
line='one'
sed -i '' '1{/'"$line"'/!i\
'"$line"'
}' file
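As a quick check, the conditional version really is idempotent. Here is a small sketch (BSD sed, using a throwaway file and the same $line variable as above):
printf 'two\nthree\nfour\n' > file
line='one'
sed -i '' '1{/'"$line"'/!i\
'"$line"'
}' file
# running it a second time is a no-op, because line 1 already matches
sed -i '' '1{/'"$line"'/!i\
'"$line"'
}' file
cat file    # one, two, three, four - with no duplicate "one"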

Your last example works for me with GNU sed. Based on the error message you added, I'd guess you're working on a Mac, where the BSD sed requires a suffix argument (even an empty one) after -i:
sed -i '' '1s/^/one\n/' READ_ME.md

If this is bash or zsh, you can use process substitution like so.
% cat x
one
two
three
% cat <(echo "zero") x
zero
one
two
three
Redirect this into a temp file, then move it back over the original.
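For example, a minimal sketch (x.tmp is just an arbitrary temporary name, not part of the original answer):
cat <(echo "zero") x > x.tmp && mv x.tmp x    # write to a temp file, then replace the original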

There is always ed:
printf '%s\n' H 1i "one" . w | ed -s READ_ME.MD
Here H turns on verbose error messages, 1i inserts text before line 1, the lone . ends the inserted text, and w writes the file back out.

Related

Weird behavior when concatenating strings in bash shell [duplicate]

This question already has answers here:
Bash script prints "Command Not Found" on empty lines
(17 answers)
Closed 6 years ago.
I have a file that stores version information, and I wrote a shell script to read two fields from it and combine them. But when I concatenate those two fields, it shows me a weird result.
version file:
buildVer = 3
version = 1.0.0
script looks like:
#!/bin/bash
verFile='version'
sdk_ver=`cat $verFile | sed -nE 's/version = (.*)/\1/p'`
build_ver=`cat $verFile | sed -nE 's/buildVer = (.*)/\1/p'`
echo $sdk_ver
echo $build_ver
tagname="$sdk_ver.$build_ver"
echo $tagname
The output shows
1.0.0
3
.30.0
When I set sdk_ver directly instead of reading it from the file, the script works fine, so I think it may be related to the sed commands, but I couldn't figure out how to fix it.
Does anyone know why it acts like that?
You're getting this problem because of DOS line endings, i.e. a trailing \r on each line of the version file. The \r captured into $sdk_ver makes the terminal return to the start of the line while echoing $tagname, so .3 overwrites the beginning of 1.0.0 and you see .30.0.
Use dos2unix or this sed command to remove \r first:
sed -i 's/\r//' version
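If you want to confirm the carriage returns are really there, one quick check (assuming the file does have CRLF endings; this is not part of the original answer) is to dump it with cat -v, which displays \r as ^M:
$ cat -v version
buildVer = 3^M
version = 1.0.0^M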
By the way, you can also simplify your script by reading both fields in one pass, like this:
#!/bin/bash
while IFS='= ' read -r k v; do
declare $k="$v"
done < <(sed $'s/\r//' version)
tagname="$version.$buildVer"
echo "$tagname"
This will give output:
1.0.0.3
Alternate solution, with awk:
awk '/version/{v=$3} /buildVer/{b=$3} END{print v "." b}' version.txt
Example:
$ cat file.txt
buildVer = 3
version = 1.0.0
$ awk '/version/{v=$3} /buildVer/{b=$3} END{print v "." b}' file.txt
1.0.0.3

Shell command to delete \n on 1 out of 2 lines in a file [duplicate]

This question already has answers here:
How to merge every two lines into one from the command line?
(21 answers)
Closed 6 years ago.
I'm looking for a shell command to delete the newline at the end of every other line.
I have a file like this:
1.32640997
;;P
1.14517534
;;P
1.16120958
;;P
...
And I would like something like this:
1.32640997;;P
1.14517534;;P
1.16120958;;P
...
Is it possible?
Thanks
Using GNU paste
paste -d '' - - < file
Using BSD paste
paste -d '\0' - - < file
paste produces two columns from stdin with - - as parameters, 3 columns with - - - as parameters, and so on.
-d is to specify a column separator, use '\0' for no separator.
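To illustrate the column behavior, a small sketch (not from the original answer; with no -d the columns are joined by tabs):
$ seq 6 | paste - - -
1	2	3
4	5	6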
Using Perl
perl -ne 'chomp($prev = $_); print $prev, scalar <>' < file
Using awk
$ awk '{printf "%s%s",$0,(NR%2==0?ORS:"")}' File
1.32640997;;P
1.14517534;;P
1.16120958;;P
This prints each line followed by nothing for odd lines or followed by the output record separator for even lines.
Using sed
This works by reading in lines in pairs:
$ sed 'N; s/\n//' File
1.32640997;;P
1.14517534;;P
1.16120958;;P
N reads in the next line and s/\n// removes the newline.
Using xargs:
xargs -n 2 -d '\n' printf '%s%s\n' <file

Removing lines from multiple files with sed command

So, disclaimer: I am pretty new to using bash and zsh, so there is a chance the answer is really simple. Nonetheless, I checked previous postings and couldn't find anything. (Edit: I have tried this in both bash and zsh shells; same problem.)
I have a directory with many files and am trying to remove the first line from each file.
So say the directory contains: file1.txt file2.txt file3.txt ... etc.
I am using the sed command (non-GNU):
sed -i -e "1d" *.txt
For some reason, this is only removing the first line of the first file. I thought that *.txt would affect all files matching the pattern in the directory. Strangely, it is also creating duplicates of the files with -e appended to their names, but the duplicates and the originals are the same.
I tried this with other commands (e.g. ls *.txt) and it works fine. Is there something about sed I am missing?
Thank you in advance.
Different versions of sed in differing operating systems support various parameters.
OpenBSD (5.4) sed
The -i flag is unavailable. You can use the following /bin/sh syntax:
for i in *.txt
do
f=`mktemp -p .`
sed -e "1d" "${i}" > "${f}" && mv -- "${f}" "${i}"
done
FreeBSD (11-CURRENT) sed
The -i flag requires an extension, even if it's empty. Thus it must be written as sed -i "" -e "1d" *.txt
GNU sed
Here -i takes an optional backup suffix that must be attached to the flag itself. Plain -i edits each file in place with no backup, which is why sed -i -e "1d" *.txt works; -i.bak keeps each original under its name plus ".bak" and writes the modified contents under the original file's name.
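For example, with GNU sed (the *.txt files here are just placeholders):
sed -i -e '1d' *.txt       # edits each file in place, no backups
sed -i.bak -e '1d' *.txt   # edits in place, keeping each original with a .bak suffix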
There might be other variations on other platforms, but those are the three I have at hand.
Use it without -e!
for one file use:
sed -i '1d' filename
for all files use:
sed -i '1d' *.txt
or
files=/path/to/files/*.extension ; for var in $files ; do sed -i '1d' $var ; done
I use Ubuntu and Debian-based systems, where this method works for me every time, but I'm not sure about other platforms, so here is another method:
Replace the first line with an empty pattern, then remove the empty lines (two commands):
for files in $(ls /path/to/files/*.txt); do sed -i "s/$(head -1 "$files")//g" "$files" ; sed -i '/^$/d' "$files" ; done
Note: if your first lines contain a slash '/', this will give an error; in that case use a different delimiter in the sed command, e.g. sed -i "s[$(head -1 "$files")[[g"
Hope that's what you're looking for :)
The issue here is that the line number isn't reset when sed opens a new file, so 1 only matches the first line of the first file.
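You can see the shared line numbering with a quick test (a sketch without -i, on two throwaway files):
$ seq 3 > f1; seq 3 > f2
$ sed '1d' f1 f2
2
3
1
2
3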
One solution is to use a shell loop, calling sed once for each file. Gumnos' answer shows how to do this in the most widely compatible way, although if you have a version of sed supporting the -i flag, you could do this instead:
for i in *.txt; do
sed -i.bak '1d' "$i"
done
It is possible to avoid creating the backup file by passing an empty suffix but personally, I don't think it's such a bad thing. One day you'll be grateful for it!
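If you do decide to skip the backup, the loop body would be (GNU sed shown; as noted above, BSD sed needs an explicit empty argument, -i ''):
for i in *.txt; do
sed -i '1d' "$i"    # in place, no backup file
done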
It appears that you're not working with GNU tools but if you were, I would recommend using GNU awk for this task. The variable FNR is useful here, as it keeps track of the record number for each file individually, allowing you to do this:
gawk -i inplace 'FNR>1' *.txt
Using the inplace extension, this allows you to remove the first line from each of your files, by only printing the lines where FNR is greater than 1.
Testing it out:
$ seq 5 > file1
$ seq 5 > file2
$ gawk -i inplace 'FNR>1' file1 file2
$ cat file1
2
3
4
5
$ cat file2
2
3
4
5
The last argument you are passing to sed is the problem.
Try something like this:
var=(`find *txt`)
for file in "${var[@]}"
do
sed -i -e 1d "$file"
done
This did the trick for me.

Bash: Output result of function into Sed parameters

sed -i '$a\curl -s http://whatismyip.org/' file
Trying to find a way to pull the WAN IP and insert it into the last line of a file as illustrated above (not working of course). This will be utilized via command line.
sed -i '$a\test' file
This will insert "test" after the last line in "file", but how could I put the result of a function or command in its place within sed's syntax? Any suggestions (awk, perl, a bash script?) are welcome!
sed isn't required here. Just use this:
curl -s http://whatsmyip.org >> your.file
Note that bash supports the >> redirection operator, which appends a command's output to a file.
hek2mgl has shown you how to solve this specific problem. To address the more general question, you can do:
var=$(some command line)
This sets the shell variable $var to the output of the command. Then you can substitute this into sed with:
sed -i "\$a\\$var" file

sed: Argument list too long

I have created a script in a Unix environment. In the script, I use the sed command shown below to delete a specified set of lines from a file, given by line number and not necessarily a contiguous range.
sed -i "101d; 102d; ... 4930d;" <file_name>
When I execute this it shows the following error:
sed: Arg is too long
Can you please help to resolve this problem?
If you want to delete a contiguous range of lines, you can specify a range of line numbers:
sed -i '101,4930d' file
If you want to delete some arbitrary set of lines that can't easily be expressed as a range, you can put the commands in a file rather than on the command line, and use sed -f.
For example, if foo.sed contains:
2d
4d
6d
8d
10d
then this:
sed -i -f foo.sed file
will delete lines 2, 4, 6, 8, and 10 from file. Putting the commands in a file rather than on the command line avoids limits on command line length.
If there's some pattern to the lines you want to delete, you might consider using a more sophisticated tool such as Awk or Perl.
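For instance, if the line numbers to drop were listed one per line in a separate file (a sketch; del.txt and file are made-up names), awk could load them into an array and skip those lines:
awk 'NR==FNR { del[$1]; next } !(FNR in del)' del.txt file > file.new    # then: mv file.new file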
I had this exact same problem.
I originally put the giant sed command sed -i "101d; 102d; ... 4930d;" <file_name> in a file and tried to execute it as a bash script.
To fix it, put only the deletion commands in a file and run that file as a sed script. I was able to execute 18,193 deletion commands that had previously failed to run.
sed -i -f to_delete.sed input_file
to_delete.sed:
101d;102d;...4930d
With awk, for the contiguous-range case:
awk ' NR < 101 || NR > 4930 { print } ' input_file
This might work for you (GNU sed and awk):
cat <<\! >/tmp/a
2
4
6
8
!
seq 10 >/tmp/b
sed 's/$/d/' /tmp/a | sed -f - /tmp/b
1
3
5
7
9
10
awk 'NR==FNR{a[$0];next};FNR in a{next};1' /tmp/{a,b}
1
3
5
7
9
10
