#!/bin/bash
if [ $# -ne 1 ]
then
echo "USAGE:vitest filename"
else
FILENAME=$1
exec vi $FILENAME <<EOF
i
Line 1.
Line 2.
^[
ZZ
EOF
fi
exit 0
I'm trying to insert Line 1. and Line 2. with exec vi, using the here document and vi commands shown above.
When running the script it gives me the following:
Vim(?):Warning: Input is not from a terminal
Vim: Error reading input, exiting...
Press ENTER or type command to continue
Vim: Finished.
Vim: Error reading input, exiting...
Vim: Finished.
You want to start vi in ex mode, with a few minor changes to the script.
vi -e "$FILENAME" <<EOF
i
Line 1.
Line 2.
.
wq
EOF
exec is almost certainly unnecessary here, especially since you have an exit command following vi. exec replaces the current script's process with the given command; it is not needed simply to run a command.
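As a minimal illustration of what exec does (using /etc/hostname as an arbitrary readable file):
#!/bin/bash
echo "before exec"
exec cat /etc/hostname   # the shell process is replaced by cat here
echo "after exec"        # never runs; the script no longer exists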
A brief history of UNIX text editors:
ed was the original editor, designed to work with a teletype rather than a video terminal.
ex was an extended version of ed, designed to take advantage of a video terminal.
vi was an editor that provided ex with a full-screen visual mode, in contrast with the line-oriented interface employed by ed and ex.
As suggested, ed can do it:
ed file << END
1i
line1
line2
.
wq
END
The "dot" line means "end of input".
It can be written less legibly as a one-liner:
printf "%s\n" 1i "line1" "line2" . wq | ed file
Use cat. It concatenates the input files, and tee then writes the combined output to file3.txt while also echoing it to the terminal:
$ cat file1.txt file2.txt | tee file3.txt
Line 1
Line 2
aaaa
bbbb
cccc
Using sed
If I understand correctly, you want to add two lines to the beginning of a file. In that case, as per Cyrus' suggestion, run:
#!/bin/bash
if [ $# -ne 1 ]
then
echo "USAGE:vitest filename"
exit 1
fi
sed -i.bak '1 s/^/line1\nline2\n/' "$1"
Notes:
When a shell variable is used, it should be in double quotes unless you want word splitting and pathname expansion to be performed. This is important for file names in particular, as it is now common for them to contain whitespace (see the demonstration after these notes).
It is best practice to use lower or mixed case names for shell variables. The system uses upper case names for its variables and you don't want to overwrite one of them accidentally.
In the check for the argument, the if statement should include an exit to prevent the rest of the script from being run in the case that no argument was provided. In the above, we added exit 1 which sets the exit code to 1 to signal an error.
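To see the word-splitting problem from the first note in action (the error text is GNU ls's):
$ f='my file.txt'; touch "$f"
$ ls $f
ls: cannot access 'my': No such file or directory
ls: cannot access 'file.txt': No such file or directory
$ ls "$f"
'my file.txt'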
Using vi
Let's start with this test file:
$ cat File
some line
Now, let's run vi and see what is in File afterward:
$ vi -s <(echo $'iline1\nline2\n\eZZ') File
$ cat File
line1
line2
some line
The above requires bash or a similar shell, because it uses process substitution (<(...)) and $'...' quoting.
Related
The description of STDIN of sh in SUSv4 2016 edition says
It shall not read ahead in such a manner that any characters intended to be read by the invoked command are consumed by the shell
I did an experiment to see it in action, by redirecting the following script file into sh -s:
#!/bin/sh
./rewind # a C program that reads its stdin to the end, prints it to stdout, then seeks its standard input back to offset 0 (SEEK_SET).
echo 1234-1234
And it keeps printing out "echo 1234-1234" forever. (Had the shell consumed the whole block of the file at once, it would have printed only "1234-1234".)
So obviously, shells (at least mine) do read at line boundaries.
However, when I examined the FreeBSD ash source code in "input.c", it reads in BUFSIZ-byte blocks, and I don't understand how that preserves line boundaries.
What I want to know is: how do shells preserve line boundaries when their source code apparently shows that they read in blocks?
Standard input isn't seekable in some cases, for example if it is redirected from a pipe or from a terminal. E.g. having a file called rew with this content:
#!/bin/bash
echo 123
perl -E 'seek(STDIN,0,0) or die "$!"' #the rewind
and using it as
bash -s < rew
prints
123
123
...
123
^C
So, when STDIN is seekable, it works as expected; but trying the same from a pipe, like this:
cat rew | bash -s #the cat is intentional here :)
will print
123
Illegal seek at -e line 1.
So, your C program rewind should print an error when it tries to seek on un-seekable input.
You can demonstrate this (surprising) behaviour with the following script:
$ cat test.sh
cut -f 1
printf '%s\t%s\n' script file
If you pass it to standard input of your shell, line 2 onward should become the standard input of the cut command:
$ sh < test.sh
printf '%s\t%s\n' script file
If you instead run it normally and type foo, then Tab, bar, Enter, and Ctrl-d, you should instead see cut reading your input from the terminal as normal:
$ sh test.sh
foo bar
foo
script file
Annotated:
$ sh test.sh # Your command
foo bar # Your input to `cut`
foo # The output of `cut`
script file # The output of `printf`
When I compiled the FreeBSD ash with the NO_HISTORY macro defined to 1 in "shell.h", it consumed the whole file and output only 1234-1234 with my rewind test program. Apparently the FreeBSD ash relies on libedit to preserve I/O line boundaries.
I was just starting to learn bash scripting and encountered a question in a book.
An example testfile contains the following content.
$ cat testfile
This is the first line.
This is the second line.
This is the third line.
And the script file is like:
#!/bin/bash
# testing input/output file descriptor
exec 3<> testfile
read line <&3
echo "Read: $line"
echo "This is a test line" >&3
After running the script, the testfile became:
$ cat testfile
This is the first line.
This is a test line
ine.
This is the third line.
I understand why that script changes the testfile. My question is: why does "ine." start on a new line? Does the echo command automatically add a newline character to the end of the string?
Yes, echo automatically adds a trailing newline; echo -n is what you seek: the -n option instructs echo to "not output the trailing newline".
That is exactly what happens in your script: read line <&3 moves the shared offset of fd 3 past the 24 bytes of the first line, and echo then writes the 19 characters of "This is a test line" plus a newline, 20 bytes in total, over the start of the second line. The second line plus its newline is 25 bytes, so its last 5 bytes, "ine." and the original newline, survive; the newline that echo wrote is what puts "ine." at the start of a new line.
FWIW: man echo on your platform documents what options the /bin/echo command understands. But since you mention bash as your shell: bash has an internal implementation of echo (a so-called "builtin"), so also check help echo.
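You can see the trailing newline by counting bytes:
$ echo hello | wc -c
6
$ echo -n hello | wc -c
5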
How can I write data to a text file automatically from a shell script in Linux?
I was able to open the file. However, I don't know how to write data to it.
The short answer:
echo "some data for the file" >> fileName
However, echo's handling of embedded end-of-line escapes such as \n varies between implementations. So, if you're going to write more than one line, do it with printf:
printf "some data for the file\nAnd a new line" >> fileName
The >> and > operators are very useful for redirecting the output of commands; they work with many other commands besides echo.
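For instance (the file names below are only illustrative):
ls -la > listing.txt                  # overwrite listing.txt with the directory listing
date >> events.log                    # append the current date and time to events.log
grep -i error app.log > errors.txt    # save matching lines to a new file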
#!/bin/sh
FILE="/path/to/file"
/bin/cat <<EOM >"$FILE"
text1
text2 # This comment will be inside of the file.
The keyword EOM can be any text, but it must start the line and be alone.
 EOM # This will also be inside of the file; note the space in front of EOM.
EOM # No comments and spaces around here, or it will not work.
text4
EOM
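A related detail worth knowing: quoting the delimiter (<<'EOM' instead of <<EOM) disables variable and command expansion inside the body. A small sketch:
name=world
cat <<EOM
expanded: hello $name
EOM
cat <<'EOM'
literal: hello $name
EOM
The first here document prints "expanded: hello world"; the second prints "literal: hello $name".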
You can redirect the output of a command to a file:
$ cat file > copy_file
or append to it
$ cat file >> copy_file
If you want to write directly the command is echo 'text'
$ echo 'Hello World' > file
#!/bin/bash
cat > FILE.txt <<EOF
info code info
info code info
info code info
EOF
I know this is a damn old question, but since the OP asked about scripting, and since Google brought me here, opening file descriptors for reading and writing at the same time should also be mentioned.
#!/bin/bash
# Open file descriptor (fd) 3 for read/write on a text file.
exec 3<> poem.txt
# Let's print some text to fd 3
echo "Roses are red" >&3
echo "Violets are blue" >&3
echo "Poems are cute" >&3
echo "And so are you" >&3
# Close fd 3
exec 3>&-
Then cat the file on terminal
$ cat poem.txt
Roses are red
Violets are blue
Poems are cute
And so are you
This example opens the file poem.txt for reading and writing on file descriptor 3. It also shows that *nix boxes know more fds than just stdin, stdout and stderr (fds 0, 1, 2); they can hold a lot of them. The maximum number of file descriptors the kernel can allocate is usually found in /proc/sys/fs/file-max, but using any fd above 9 is risky, as it could conflict with fds the shell uses internally. So don't bother, and use only fds 0-9. If you need more than 9 file descriptors in a bash script, you should probably use a different language anyway :)
Anyhow, fd's can be used in a lot of interesting ways.
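For instance, one common pattern (sketched here with a hypothetical input.txt) is reading a file on fd 3 so that stdin stays free for prompting the user inside the loop:
#!/bin/bash
exec 3< input.txt                             # open input.txt for reading on fd 3
while read -r line <&3; do
    read -p "Process '$line'? [y/N] " answer  # fd 0 is still the terminal
    [ "$answer" = y ] && echo "processing $line"
done
exec 3<&-                                     # close fd 3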
I like this answer:
cat > FILE.txt <<EOF
info code info
...
EOF
but would suggest cat >> FILE.txt << EOF if you just want to add something to the end of the file without wiping out what already exists
Like this:
cat >> FILE.txt <<EOF
info code info
...
EOF
Moving my comment as an answer, as requested by #lycono
If you need to do this with root privileges, do it this way:
sudo sh -c 'echo "some data for the file" >> fileName'
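A commonly used equivalent pipes through tee, so that only the file write runs with elevated privileges (the > /dev/null just hides tee's copy of the output):
echo "some data for the file" | sudo tee -a fileName > /dev/null
This is needed because in sudo echo "..." >> fileName the redirection would be performed by your unprivileged shell, not by sudo.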
For environments where here documents are unavailable (Makefile, Dockerfile, etc) you can often use printf for a reasonably legible and efficient solution.
printf '%s\n' '#!/bin/sh' '# Second line' \
'# Third line' \
'# Conveniently mix single and double quotes, too' \
"# Generated $(date)" \
'# ^ the date command executes when the file is generated' \
'for file in *; do' \
' echo "Found $file"' \
'done' >outputfile
I thought there were a few perfectly fine answers, but no concise summary of all possibilities; thus:
The core principle behind most answers here is redirection. Two redirection operators are important for writing to files:
Redirecting Output:
echo 'text to completely overwrite contents of myfile' > myfile
Appending Redirected Output:
echo 'text to add to end of myfile' >> myfile
Here Documents
As others mentioned, rather than reading from a fixed input source like echo 'text', you can also write to files interactively via a "Here Document", which is also detailed in the bash manual. Those answers, e.g.
cat > FILE.txt <<EOF or cat >> FILE.txt <<EOF
make use of the same redirection operators, but add another layer via "Here Documents". In the above syntax, you write to FILE.txt via the output of cat. The writing only takes place after the interactive input is given some specific string, in this case 'EOF', but this could be any string, e.g.:
cat > FILE.txt <<'StopEverything' or cat >> FILE.txt <<'StopEverything'
would work just as well. Here Documents also look for various delimiters and other interesting parsing characters, so have a look at the docs for further info on that.
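One of those details is the <<- variant, which strips leading tab characters from the body and from the closing delimiter, so a here document can be indented to match the surrounding script. A small sketch (the indentation must be real tabs, not spaces):
cat > FILE.txt <<-EOF
	this line is indented with a tab in the script
	but is written to FILE.txt without it
	EOF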
Here Strings
A bit convoluted, and more of an exercise in understanding both redirection and here-document syntax, but you can combine here-document-style syntax with the standard redirect operators to get a Here String:
Redirecting Output of cat Input
cat > myfile <<<'text to completely overwrite contents of myfile'
Appending Redirected Output of cat Input
cat >> myfile <<<'text to append to the end of myfile'
This approach works and is the best
cat > (filename) <<EOF
Text1...
Text2...
EOF
Basically, the shell keeps reading the text until it finds the keyword "EOF", at which point it stops writing/appending to the file.
If you are using variables, you can use
first_var="Hello"
second_var="How are you"
If you want to concatenate both strings and write them to a file, then use the below:
echo "${first_var} - ${second_var}" > ./file_name.txt
Your file_name.txt content will be "Hello - How are you"
You can also use a here document and vi; the script below generates FILE.txt with 3 lines and variable interpolation:
VAR=Test
vi FILE.txt <<EOFXX
i
#This is my var in text file
var = $VAR
#Thats end of text file
^[
ZZ
EOFXX
The file will then have the 3 lines below. "i" starts vi's insert mode, and the file is saved and closed with Esc (^[) followed by ZZ.
#This is my var in text file
var = Test
#Thats end of text file
I'm currently using the following to capture everything that goes to the terminal and throw it into a log file:
exec 4<&1 5<&2 1>&2>&>(tee -a $LOG_FILE)
However, I don't want color escape codes/clutter going into the log file, so I have something like this that sorta works:
exec 4<&1 5<&2 1>&2>&>(
while read -u 0; do
#to terminal
echo "$REPLY"
#to log file (color removed)
echo "$REPLY" | sed -r 's/\x1B\[([0-9]{1,2}(;[0-9]{1,2})?)?[m|K]//g' >> $LOG_FILE
done
unset REPLY #tidy
)
except that read waits for a newline, which isn't ideal for some portions of the script (e.g. echo -n "..." or printf without \n).
Follow-up to Jonathan Leffler's answer:
Given the example script test.sh:
#!/bin/bash
LOG_FILE="./test.log"
echo -n >$LOG_FILE
exec 4<&1 5<&2 1>&2>&>(tee -a >(sed -r 's/\x1B\[([0-9]{1,2}(;[0-9]{1,2})?)?[m|K]//g' > $LOG_FILE))
##### ##### #####
# Main
echo "starting execution"
printf "\n\n"
echo "color test:"
echo -e "\033[0;31mhello \033[0;32mworld\033[0m!"
printf "\n\n"
echo -e "\033[0;36mEnvironment:\033[0m\n foo: cat\n bar: dog\n your wife: hot\n fix: A/C"
echo -n "Before we get started. Is the above information correct? "
read YES
echo -e "\n[READ] $YES" >> $LOG_FILE
YES=$(echo "$YES" | sed 's/^\s*//;s/\s*$//')
test ! "$(echo "$YES" | grep -iE '^y(es)?$')" && echo -e "\nExiting... :(" && exit
printf "\n\n"
#...some hundreds of lines of code later...
echo "Done!"
##### ##### #####
# End
exec 1<&4 4>&- 2<&5 5>&-
echo "Log File: $LOG_FILE"
The output to the terminal is as expected, and there are no color escape codes/clutter in the log file, as desired. However, upon examining test.log, I do not see the [READ] ... line (see line 21 of test.sh).
The log file [of my actual bash script] contains the line Log File: ... at the end even after closing the 4 and 5 fds. I was able to resolve the issue by putting a sleep 1 before the second exec; I assume a race condition or fd shenanigans are to blame. Unfortunately for you guys, I am not able to reproduce this issue with test.sh, but I'd be interested in any speculation anyone may have.
Consider using the pee program discussed in Is it possible to distribute stdin over parallel processes. It would allow you to send the log data through your sed script, while continuing to send the colours to the actual output.
One major advantage of this is that it would remove the 'execute sed once per line of log output' pattern; that is really diabolical for performance (in terms of the number of processes executed, if nothing else).
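A minimal sketch of that idea, assuming pee from moreutils is installed and that $LOG_FILE contains no spaces; one copy of the output is stripped of escape sequences and appended to the log, the other goes to the terminal untouched:
exec > >(pee "sed -r 's/\x1B\[[0-9;]*[mK]//g' >> $LOG_FILE" cat) 2>&1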
I know it's not a perfect solution, but cat -v will convert non-visible characters like \x1B into a visible form like ^[[1;34m. The output will be messy, but at least it will be ASCII text.
I used to do stuff like this by setting TERM=dumb before running my command. That pretty much removes any control characters except for tab, CR, and LF. I have no idea if this works for your situation, but it's worth a try. The downside is that you won't see colors on your terminal either, since it's a dumb terminal.
You can also try either vis or cat (especially the -v parameter) and see if these do something for you. You'd simply put them in your pipeline like this:
exec 4<&1 5<&2 1>&2>&>(tee -a >(cat -v > $LOG_FILE))
By the way, almost all terminal programs have an option to capture the input, and most clean it up for you. What platform are you on, and what type of terminal program are you using?
You could attempt to use the -n option of read. It reads n characters instead of waiting for a newline; you could set it to one. This increases the number of iterations the loop runs, but it avoids waiting for newlines.
From the man:
-n NCHARS read returns after reading NCHARS characters rather than waiting for a complete line of input.
Note: I have not tested this
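An untested sketch of what the loop from the question might look like with single-character reads:
while IFS= read -r -n 1 c; do
    if [ -z "$c" ]; then
        printf '\n'       # read -n 1 returns an empty variable when it reads a newline
    else
        printf '%s' "$c"
    fi
done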
You can use ANSIFilter to strip or transform console output with ANSI escape sequences.
See http://www.andre-simon.de/zip/download.html#ansifilter
Might not screen -L or the script command be viable options instead of this exec loop?
I have a file with 2 lines and I want to read them into 2 variables respectively. How do I accomplish this in shellscript(bash)?
You can open file descriptors in a shell to read the variables:
#!/bin/bash
# open file
exec 6<tst.txt
read foo <&6
read bar <&6
# close file again
exec 6<&-
echo $foo $bar
EDIT:
As a quick explanation, this is using IO redirection. Normally the file descriptors are handled as follows:
0 stdin (input)
1 stdout (output)
2 stderr (error)
However, there's nothing preventing you from using other file descriptors (up to 9), so we're opening the file "tst.txt" on file descriptor 6 and reading from it using IO redirection.
So, exec 6<tst.txt opens file descriptor 6 and redirects tst.txt into it, whereas exec 6<&- closes it again.
I'm unfortunately not on linux right now to test, but this would be close.
#!/bin/bash
file="/path/to/file"
# Store the previous IFS so we don't break anything else in the script.
prevIFS="$IFS"
# You need the line break so that fields are split on newlines.
IFS='
'
# -d '' makes read consume the whole file instead of stopping at the first newline.
read -r -d '' var1 var2 < "$file"
echo "Var1: $var1"
echo "Var2: $var2"
# Set IFS back to normal
IFS="$prevIFS"
The simplest answer would be using the sed command. Assuming that your file name is file.txt:
var1=($(sed '1q;d' file.txt))
var2=($(sed '2q;d' file.txt))
Here 1q and 2q select the line number. Because of the array syntax var1=( ... ), the words of line 1 become the elements of the array var1, and similarly for var2. If you want each whole line as a single string instead, drop the outer parentheses: var1=$(sed '1q;d' file.txt).
Try this:
#!/bin/bash
I=0
while read -r; do
VAR[$I]=$REPLY
((I++))
done < file
echo ${VAR[0]}
echo ${VAR[1]}
This will work with a file with more than 2 lines as well.
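In bash 4 or later, the builtin mapfile (also spelled readarray) does the same in one line:
mapfile -t VAR < file   # -t strips the trailing newline from each line
echo "${VAR[0]}"
echo "${VAR[1]}"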
Can you reconfigure the input file (with variables) to work as shell code? I.e.:
$ cat varFile
var1=xyz
var2=abc
$ cat myShellScript.sh
#!/bin/bash
# source the variable file
. /path/to/varFile
echo $var1
echo $var2
This is a standard concept in shell scripting and makes it much easier to manage configuration where you need to control your (unix/linux) environment based on which physical hardware your system is running on. If this is part of your concern, let me know and I'll update the sample code to extend this technique; a sketch follows.
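As a sketch of that idea (the directory layout and file names are hypothetical):
#!/bin/bash
# Pick a variable file based on the machine we are running on.
host=$(hostname -s)
. "/etc/myapp/hosts/${host}.vars"
echo "var1 is $var1"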
I hope this helps.