How to append the output of bashrc to txtfile - bash

In a Linux terminal, what is the command to append the output of bashrc to a text file (e.g. mybash.txt)? I know that appending uses the double angle brackets '>>', but I do not know how to append the output of bashrc to the text file.

You can use cat file >> outfile. If you only want to read the start of the file you can use:
head -N file >> outfile # where N is the number of lines you want to write
For the last part of a file you can use:
tail -N file >> outfile # where N is the number of lines you want to write
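For the original question, a minimal example, assuming the file meant is ~/.bashrc and the target is mybash.txt:
cat ~/.bashrc >> mybash.txt        # append the whole file
head -20 ~/.bashrc >> mybash.txt   # or just the first 20 lines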

Related

bash one liner to remove duplicate path in line

I have a file with a lot of strings, and one line starts with LIBXML2_INCLUDE.
The file is generated by another program (specifically by ./configure). This line wrongly gives two paths; the first path is not correct and I need to remove it. This is how the line appears in the file:
LIBXML2_INCLUDE=-I/home/gan/Music/wvm/build/level/ast/deliveryx/libxml2//home/gan/Music/wvm/build/level/ast/deliveryx/libxml2/include/libxml2
I need to remove the first /home/gan/Music/wvm/build/level/ast/deliveryx/libxml2/
and the expected output is
LIBXML2_INCLUDE=-I/home/gan/Music/wvm/build/level/ast/deliveryx/libxml2/include/libxml2
How can I create a bash one-liner to accomplish this?
Try like this:
# cat file
SOMEVAR=-I/some/path//some/path
# sed -i -e '/^SOMEVAR=/s,=-I.*//,=-I/,' file
# cat file
SOMEVAR=-I/some/path
#
To be a bit more fancy --
$ cat file
SOMEVAR=-I/some/path//some/path
$ sed -i -e '/^SOMEVAR=/s,=-I\(.*\)/\1$,=-I\1/,' file
$ cat file
SOMEVAR=-I/some/path/
$
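Applied to the line from the question, the same substitution would look like this (a sketch; "file" here stands in for the generated file, whose real name is not given):
sed -i -e '/^LIBXML2_INCLUDE=/s,=-I.*//,=-I/,' file
# before: LIBXML2_INCLUDE=-I/home/gan/Music/wvm/build/level/ast/deliveryx/libxml2//home/gan/Music/wvm/build/level/ast/deliveryx/libxml2/include/libxml2
# after:  LIBXML2_INCLUDE=-I/home/gan/Music/wvm/build/level/ast/deliveryx/libxml2/include/libxml2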

Get first 5 lines of a file, using a file of program names as input (Unix)

Goal: using an input file with a list of file names, get the first 5 lines of each file and output to another file. Basically, I'm trying to find out what each program does by reading the header.
Shell: Ksh
Input: myfile.txt
tmp/file1.txt
tmp/file2.txt
Output:
tmp/file1.txt - "Creates web login screen"
tmp/file2.txt - "Updates user login"
I can use "head -5" but not sure how to get the input from the file. I'm assuming I could redirect (>> output.txt)the output for my output file.
Input file names use a relative path.
Update: I created the script below, but I'm getting "syntax error: unexpected end of file". The script was created with vi.
#! /bin/sh
cat $HOME/jmarti20.list | while read line
do
#echo $line" >> jmarti20.txt
head -n 5 /los_prod/$line >> $HOME/jmarti20.txt
done
Right, you can append output with >> to a file.
head -n 5 file1.txt >> file_descriptions.txt
You can also use sed to print lines; see the documentation at pinfo sed.
sed 5q file1.txt >> file_descriptions.txt
Personal preference is to put the file description on line 3, and only print line 3 of each file.
sed -n 3p file1.txt >> file_descriptions.txt
The reasoning for using line 3 is that the first line often contains a "shebang" like #!/bin/bash, and the 2nd line often carries an encoding declaration such as # -*- coding: UTF-8 -*-, which lets terminals and text editors that support it display extra character glyphs and languages properly.
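A sketch of how that could be driven by the list file from the question (reusing the jmarti20.list name and the /los_prod path; the output filename is illustrative):
while read -r name; do
  printf '%s - ' "$name" >> file_descriptions.txt        # program name and a separator
  sed -n 3p "/los_prod/$name" >> file_descriptions.txt   # line 3 of that program
done < "$HOME/jmarti20.list"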
Below is what I came up with, and it seems to work fairly well:
#! /bin/sh
cat $HOME/jmarti20.list | while read line
do
temp=$line
temp2=$(head -n 5 /los_prod/$line)
echo "$temp" "$temp2" >> jmarti20.txt
#echo "$line" >> jmarti20.txt
#head -n 5 /los_prod/$line >> $HOME/jmarti20.txt
done

Bash - read specific line from a file with all sorts of data and store as a variable

I have looked for an answer to what seems like a simple question, but I feel as though all these questions (below) only briefly touch on the matter and/or over-complicate the solution.
Read a file and split each line into two variables with bash program
Bash read from file and store to variables
Need to assign the contents of a text file to a variable in a bash script
What I want to do is read specific lines from a file (titled 'input'), store them as variables and then use them.
For example, in this code, every 9th line after a certain point contains a filename that I want to store as a variable for later use. How can I do that?
steps=49
for((i=1;i<=${steps};i++)); do
...
g=$((9 * $i + 28)) #In.omega filename
For the bigger picture, I basically need to print a specific line (line 9) from the file whose name is specified in the gth line of the file named "input":
sed '1,39p;d' data > temp
sed "9,9p;d" [filename specified in line g of input] >> temp
sed '41,$p;d' data >> temp
mv temp data
Say you want to assign the 49th line of the file $FILE to the variable $ARG; you can do:
$ ARG=`cat $FILE | head -49 | tail -1`
To get line 9 of the file named in the gth line of the file named input:
sed -n 9p "$(sed -n ${g}p input)"
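Inside the loop from the question, the result could be captured in variables, for example (the variable names are illustrative):
fname=$(sed -n "${g}p" input)   # filename stored on line g of "input"
line9=$(sed -n 9p "$fname")     # line 9 of that file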
arg=$(cat sample.txt | sed -n '2p')
where arg is the variable, sample.txt is the file, and 2 is the line number.

Extract line from a file in shell script

I have a text file of 5,000,000 lines and I want to extract one line from every 1000 and write them into a new text file. The new text file should be 5000 lines long.
Can you help me?
I would use a Python script to do so. However, the same logic can be used with your shell as well. Here is the Python code.
input_file = 'path/file.txt'
output_file = 'path/output.txt'
n = 0
with open(input_file, 'r') as f:
    with open(output_file, 'w') as o:
        for line in f:
            n += 1
            if n == 1000:
                o.write(line)
                n = 0
Basically, you initialise a counter, then iterate over the file line by line, incrementing the counter for each line; when the counter hits 1000 you write the line to the new file and reset the counter.
Here is how to iterate over the lines of a file using the Bash shell.
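A minimal sketch of the same counter logic in bash, using the hypothetical paths from the Python snippet above:
n=0
while IFS= read -r line; do
  n=$((n + 1))
  if [ "$n" -eq 1000 ]; then
    printf '%s\n' "$line" >> path/output.txt   # keep every 1000th line
    n=0
  fi
done < path/file.txt
For a 5,000,000-line file, though, the awk answer below will generally be much faster than a shell loop.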
Try:
awk 'NR%1000==1' infile > outfile
see this link for more options: remove odd or even lines from text file in terminal in linux
You can use either head or tail, depending on which line you'd like to extract.
To extract first line from each file (for instance *.txt files):
head -n1 *.txt | grep -ve ^= -e ^$ > first.txt
To extract the last line from each file, simply use tail instead of head.
For extracting a specific line, see: How do I use Head and Tail to print specific lines of a file.

Extract line from text file based on leading characters of each line

I have a very large data dump that I need to manipulate. Basically, I receive a text file that has data from multiple tables in it. The first two characters of each line tell me what table this is from. I need to read each of these lines and then extract them into a text file... It would append each line to the text file. Each table should have its own text file.
For example, lets say the data file looks like this...
HDxxxxxxxxxxxxx
HDyyyyyyyyyyyyy
ENxxxxxxxxxxxxx
ENyyyyyyyyyyyyy
HSyyyyyyyyyyyyy
What I would need is the first two lines to be in a text file named HD_out.txt, the 3rd and 4th lines in one named EN_out.txt, and the last one in a file named HS_out.txt.
Does anyone know how this could be done with either a simple batch file or a UNIX shell script?
Use awk to split the file based on the first 2 characters:
gawk -v FIELDWIDTHS='2 99999' '{print $2 > $1"_out.txt"}' input.txt
Using bash:
while read -r line; do
echo "${line:2}" >> "${line:0:2}_out.txt"
done < inputFile
${var:startposition:length} is bash substring expansion, used to capture sub-strings. This causes your input file to be split based on the first two characters. If you want to include the table prefix, just use echo "$line" >> "${line:0:2}_out.txt" instead of what is shown above.
Demo:
$ ls
file
$ cat file
HDxxxxxxxxxxxxx
HDyyyyyyyyyyyyy
ENxxxxxxxxxxxxx
ENyyyyyyyyyyyyy
HSyyyyyyyyyyyyy
$ while read -r line; do echo "${line:2}" >> "${line:0:2}_out.txt"; done < file
$ ls
EN_out.txt file HD_out.txt HS_out.txt
$ head *.txt
==> EN_out.txt <==
xxxxxxxxxxxxx
yyyyyyyyyyyyy
==> HD_out.txt <==
xxxxxxxxxxxxx
yyyyyyyyyyyyy
==> HS_out.txt <==
yyyyyyyyyyyyy
