Shell script to edit a large text file

I need to prepend the server name to each user entry associated with that server. Below is an example of the file I'm currently dealing with:
<server_name_a>
at:x:25:25:Batch jobs daemon:/var/spool/atjobs:/bin/bash
bin:x:1:1:bin:/bin:/bin/bash
<server_name_b>
at:x:25:25:Batch jobs daemon:/var/spool/atjobs:/bin/bash
bin:x:1:1:bin:/bin:/bin/bash
I need a script to pull out each server name and prepend it to the user entries that follow it, so the file would end up looking like this:
<server_name_a>
<server_name_a>:at:x:25:25:Batch jobs daemon:/var/spool/atjobs:/bin/bash
<server_name_a>:bin:x:1:1:bin:/bin:/bin/bash
<server_name_b>
<server_name_b>:at:x:25:25:Batch jobs daemon:/var/spool/atjobs:/bin/bash
<server_name_b>:bin:x:1:1:bin:/bin:/bin/bash

Here's a simple awk program that should do what you're describing. It assumes the server-name lines in your real file start with INFO and that the name is the last field on those lines; it prints the server name on its own line, then prefixes every user entry:
awk '/^INFO/ { prefix = $NF; print prefix; next } { printf "%s:%s\n", prefix, $0 }' < "$FILENAME" > "$OUTPUT"
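If the file really contains just the bare <server_name_a>-style header lines shown above (no INFO prefix), the same idea keyed on the leading angle bracket should work; servers.txt is a hypothetical input name here:
awk '/^</ { prefix = $0; print; next } { print prefix ":" $0 }' servers.txt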

Try something like this:
#!/bin/bash
cat "$@" | while read -r line
do
    if echo "$line" | grep -q "^INFO:"
    then
        SERVER=$(echo "$line" | sed -e 's/.*<\(.*\)>.*/\1/')
        echo "$SERVER"          # keep the server line in the output
    else
        echo "$SERVER:$line"
    fi
done

Related

Why is this bash loop failing to concatenate the files?

I am at my wits' end as to why this loop is failing to concatenate the files the way I need. Basically, let's say we have the following files:
AB124661.lane3.R1.fastq.gz
AB124661.lane4.R1.fastq.gz
AB124661.lane3.R2.fastq.gz
AB124661.lane4.R2.fastq.gz
What we want is:
cat AB124661.lane3.R1.fastq.gz AB124661.lane4.R1.fastq.gz > AB124661.R1.fastq.gz
cat AB124661.lane3.R2.fastq.gz AB124661.lane4.R2.fastq.gz > AB124661.R2.fastq.gz
What I tried (and didn't work):
Create and save the file names (e.g. AB124661) to an ID file:
ls -1 *R1*.gz | awk -F '.' '{print $1}' | sort | uniq > ID
This creates an ID file that stores the samples/files name.
Run the following loop:
for i in `cat ./ID`; do cat $i\.lane3.R1.fastq.gz $i\.lane4.R1.fastq.gz \> out/$i\.R1.fastq.gz; done
for i in `cat ./ID`; do cat $i\.lane3.R2.fastq.gz $i\.lane4.R2.fastq.gz \> out/$i\.R2.fastq.gz; done
The loop fails and concatenates into empty files.
Things I tried:
Yes, the ID file is definitely in the folder
When I run it with echo, it shows the cat command correctly
Any help will be very much appreciated,
Best,
AC
Why are you escaping the \>? That will result in cat: '>': No such file or directory instead of a redirection.
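For illustration, the first loop with the stray backslashes removed (it still word-splits on the contents of ID, which the answers below avoid):
for i in `cat ./ID`; do cat $i.lane3.R1.fastq.gz $i.lane4.R1.fastq.gz > out/$i.R1.fastq.gz; done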
Don't read lines with for
while IFS= read -r id; do
    cat "${id}.lane3.R1.fastq.gz" "${id}.lane4.R1.fastq.gz" > "out/${id}.R1.fastq.gz"
    cat "${id}.lane3.R2.fastq.gz" "${id}.lane4.R2.fastq.gz" > "out/${id}.R2.fastq.gz"
done < ./ID
Let's say you have one ID per line stored in the file ./ID:
while read -r line; do
    cat "$line".lane3.R1.fastq.gz "$line".lane4.R1.fastq.gz > "$line".R1.fastq.gz
    cat "$line".lane3.R2.fastq.gz "$line".lane4.R2.fastq.gz > "$line".R2.fastq.gz
done < ./ID
A pure shell solution could look like this:
for file in *.fastq.gz; do
    id=${file%%.*}
    [ -e "$id".R1.fastq.gz ] || cat "$id".*.R1.fastq.gz > "$id".R1.fastq.gz
    [ -e "$id".R2.fastq.gz ] || cat "$id".*.R2.fastq.gz > "$id".R2.fastq.gz
done
Alternatively:
printf '%s\n' *.fastq.gz | cut -d. -f1 | sort -u |
while IFS= read -r id; do
    cat "$id".*.R1.fastq.gz > "$id".R1.fastq.gz
    cat "$id".*.R2.fastq.gz > "$id".R2.fastq.gz
done
This solution assumes filenames of interest don't contain newline characters.

Extract a line from a text file using grep?

I have a text file called log.txt, and it logs the file name and the path it was gotten from. So it looks something like this:
2.txt
/home/test/etc/2.txt
Basically the file name and then its previous location. I want to use grep to grab the file's directory, save it as a variable, and move the file back to its original location.
for var in "$@"
do
    if grep "$var" log.txt
    then
        : # code if found
    else
        : # code if not found
    fi
done
This just prints 2.txt and its directory to the console, since the directory line contains 2.txt.
Thanks.
Maybe flip the logic to make it more efficient?
f=''
while read -r prev
do case "$prev" in
   */*) [[ -e "$f" ]] && mv "$f" "$prev";;  # path line: move the remembered file back
   *)   f="$prev";;                         # bare name line: remember the name
   esac
done < log.txt
That walks through all the files in the log and, if they exist locally, moves them back. It should be functionally the same, without a grep per file.
If the name is always just the last component of the path, why save it in the log at all? If it is, then:
while read -r prev
do f="${prev##*/}"                  # strip the path info
   [[ -e "$f" ]] && mv "$f" "$prev"
done < <( grep / log.txt )
Having the file names on the same line as their paths would significantly simplify your script. But maybe try something like:
# Convert from command-line arguments to lines
printf '%s\n' "$@" |
# Pair up with entries in file
awk 'NR==FNR { f[$0]; next }
FNR%2 { if ($0 in f) p=$0; else p=""; next }
p { print "mv \"" p "\" \"" $0 "\"" }' - log.txt |
sh
Test it by replacing sh with cat and see what you get. If it looks correct, switch back.
Briefly, something similar could perhaps be pulled off with printf '%s\n' "$@" | grep -A 1 -Fxf - log.txt, but you end up having to parse the output to pair up the lines anyway.
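A sketch of that parsing, for what it's worth (grep inserts -- separators between match groups, which the loop has to skip; it assumes the logged names never contain slashes):
printf '%s\n' "$@" | grep -A 1 -Fxf - log.txt |
while IFS= read -r name; do
    [ "$name" = "--" ] && continue      # skip grep's group separators
    IFS= read -r path || break          # the line after a name is its path
    [ -e "$name" ] && mv "$name" "$path"
done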
Another solution:
for f in $(grep -v "/" log.txt); do
    grep "/$f" log.txt | xargs -I{} cp "$f" {}
done
grep -q (for "quiet") suppresses the output.
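So the test in the question's loop can stay silent, something like:
if grep -q "$var" log.txt
then
    echo "$var is in the log"   # act on the match without printing it
fi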

Read multiple variables from file

I need to read a file that has lines like
user=username1
pass=password1
How can I read multiple lines like this into separate variables like username and password?
Would I use awk or grep? I have found ways to read lines into variables with grep, but would I need to re-read the file for each individual item?
The end result is to use these variables to access a database via the command line. So I need to be able to read, store and use these values in other commands.
If the process that generates the file is trusted and the file uses shell syntax, just source it:
. ./file
Otherwise the file can be preprocessed to add quotes:
perl -ne 'if (/^([A-Za-z_]\w*)=(.*)/) {$k=$1;$v=$2;$v=~s/\x27/\x27\\\x27\x27/g;print "$k=\x27$v\x27\n";}' <file >file2
. ./file2
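After sourcing, the variables are available directly in the current shell; a quick check against the sample file above:
$ . ./file2
$ echo "$user"
username1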
If you want to use awk then
Input
$ cat file
user=username1
pass=password1
Reading
$ user=$(awk -F= '$1=="user"{print $2;exit}' file)
$ pass=$(awk -F= '$1=="pass"{print $2;exit}' file)
Output
$ echo $user
username1
$ echo $pass
password1
You could use a loop for your file perhaps, but this is probably the functionality you're looking for.
$ echo 'user=username1' | awk -F= '{print $2}'
username1
Using the -F flag sets the delimiter to =, and we select the second field of the row.
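That loop might look like this (a sketch, assuming the strict key=value layout of the sample file above; note it keeps only the last pair read, while the next answer acts on each pair in turn):
while IFS='=' read -r key value; do
    case "$key" in
        user) user=$value ;;
        pass) pass=$value ;;
    esac
done < file
echo "$user $pass"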
file.txt:
user=username1
pass=password1
user=username2
pass=password2
user=username3
pass=password3
To avoid reading the file file.txt several times:
#!/usr/bin/env bash
func () {
    echo "user:$1 pass:$2"
}
i=0
while IFS='' read -r line; do
    if [ $i -eq 0 ]; then
        i=1
        user=$(echo "${line}" | cut -f2 -d'=')
    else
        i=0
        pass=$(echo "${line}" | cut -f2 -d'=')
        func "$user" "$pass"
    fi
done < file.txt
Output:
user:username1 pass:password1
user:username2 pass:password2
user:username3 pass:password3

How to remove a filename from a list of paths in shell

I would like to remove only the file name from each path in the following configuration file.
Configuration File -- test.conf
knowledgebase/arun/test.rf
knowledgebase/arunraj/tester/test.drl
knowledgebase/arunraj2/arun/test/tester.drl
The above file should be read, and the results with the file names removed should go to another file called output.txt.
Below is my attempt. It is not working for me at all; I am getting empty files only.
#!/bin/bash
file=test.conf
while IFS= read -r line
do
# grep --exclude=*.drl line
# awk 'BEGIN {getline line ; gsub("*.drl","", line) ; print line}'
# awk '{ gsub("/",".drl",$NF); print line }' arun.conf
# awk 'NF{NF--};1' line arun.conf
echo $line | rev | cut -d'/' -f 1 | rev >> output.txt
done < "$file"
Expected Output :
knowledgebase/arun
knowledgebase/arunraj/tester
knowledgebase/arunraj2/arun/test
There's the dirname command to make it easy and reliable:
#!/bin/bash
file=test.conf
while IFS= read -r line
do
dirname "$line"
done < "$file" > output.txt
There are Bash shell parameter expansions that will work OK with the list of names given but won't work reliably for some names:
file=test.conf
while IFS= read -r line
do
echo "${line%/*}"
done < "$file" > output.txt
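For example, the expansion falls through unchanged when there is no slash at all, where dirname would print . instead:
$ line=plain.file
$ echo "${line%/*}"
plain.file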
There's sed to do the job — easily with the given set of names:
sed 's%/[^/]*$%%' test.conf > output.txt
It's harder if you have to deal with names like /plain.file (or plain.file — the same sorts of edge cases that trip up the shell expansion).
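For the record, a sed variant that handles those two edge cases the way dirname does (a sketch; not chased through every corner, e.g. trailing slashes):
sed -e 's%^[^/]*$%.%' \
    -e 's%^\(/\)[^/]*$%\1%' \
    -e 's%\(.\)/[^/]*$%\1%' test.conf > output.txt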
You could add Perl, Python, Awk variants to the list of ways of doing the job.
You can get the path like this:
path=${fullpath%/*}
It cuts away the last / and everything after it.
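A quick check with one of the paths from the question:
$ fullpath=knowledgebase/arunraj/tester/test.drl
$ echo "${fullpath%/*}"
knowledgebase/arunraj/tester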
Using an awk one-liner, you can do this:
awk 'BEGIN{FS=OFS="/"} {NF--} 1' test.conf
Output:
knowledgebase/arun
knowledgebase/arunraj/tester
knowledgebase/arunraj2/arun/test

Pipe input into a script

I have written a shell script in ksh to convert a CSV file into Spreadsheet XML file. It takes an existing CSV file (the path to which is a variable in the script), and then creates a new output file .xls. The script has no positional parameters. The file name of the CSV is currently hardcoded into the script.
I would like to amend the script so it can take the input CSV data from a pipe, and so that the .xls output data can also be piped or redirected (>) to a file on the command line.
How is this achieved?
I am struggling to find documentation on how to write a shell script that takes input from a pipe. It appears that read is only used for standard input from the keyboard.
Thanks.
Edit: script below for info (now amended to take input from a pipe via cat, as per the answer to the question).
#!/bin/ksh
#Script to convert a .csv data to "Spreadsheet ML" XML format - the XML scheme for Excel 2003
#
# Take CSV data as standard input
# Out XLS data as standard output
#
DATE=`date +%Y%m%d`
#define tmp files
INPUT=tmp.csv
IN_FILE=in_file.csv
#take standard input and save as $INPUT (tmp.csv)
cat > $INPUT
#clean input data and save as $IN_FILE (in_file.csv)
grep '.' $INPUT | sed 's/ *,/,/g' | sed 's/, */,/g' > $IN_FILE
#delete original $INPUT file (tmp.csv)
rm $INPUT
#detect the number of columns and rows in the input file
ROWS=`wc -l < $IN_FILE | sed 's/ //g' `
COLS=`awk -F',' '{print NF; exit}' $IN_FILE`
#echo "Total columns is $COLS"
#echo "Total rows is $ROWS"
#create start of Excel File
echo "<?xml version=\"1.0\"?>
<?mso-application progid=\"Excel.Sheet\"?>
<Workbook xmlns=\"urn:schemas-microsoft-com:office:spreadsheet\"
xmlns:o=\"urn:schemas-microsoft-com:office:office\"
xmlns:x=\"urn:schemas-microsoft-com:office:excel\"
xmlns:ss=\"urn:schemas-microsoft-com:office:spreadsheet\"
xmlns:html=\"http://www.w3.org/TR/REC-html40\">
<DocumentProperties xmlns=\"urn:schemas-microsoft-com:office:office\">
<Author>Ben Hamilton</Author>
<LastAuthor>Ben Hamilton</LastAuthor>
<Created>${DATE}</Created>
<Company>MCC</Company>
<Version>10.2625</Version>
</DocumentProperties>
<ExcelWorkbook xmlns=\"urn:schemas-microsoft-com:office:excel\">
<WindowHeight>6135</WindowHeight>
<WindowWidth>8445</WindowWidth>
<WindowTopX>240</WindowTopX>
<WindowTopY>120</WindowTopY>
<ProtectStructure>False</ProtectStructure>
<ProtectWindows>False</ProtectWindows>
</ExcelWorkbook>
<Styles>
<Style ss:ID=\"Default\" ss:Name=\"Normal\">
<Alignment ss:Vertical=\"Bottom\" />
<Borders />
<Font />
<Interior />
<NumberFormat />
<Protection />
</Style>
<Style ss:ID=\"AcadDate\">
<NumberFormat ss:Format=\"Short Date\"/>
</Style>
</Styles>
<Worksheet ss:Name=\"Sheet 1\">
<Table>
<Column ss:AutoFitWidth=\"1\" />"
#for each row in turn, create the XML elements for row/column
r=1
while (( r <= $ROWS ))
do
echo "<Row>\n"
c=1
while (( c <= $COLS ))
do
DATA=`sed -n "${r}p" $IN_FILE | cut -d "," -f $c `
if [[ "${DATA}" == [0-9][0-9]\.[0-9][0-9]\.[0-9][0-9][0-9][0-9] ]]; then
DD=`echo $DATA | cut -d "." -f 1`
MM=`echo $DATA | cut -d "." -f 2`
YYYY=`echo $DATA | cut -d "." -f 3`
echo "<Cell ss:StyleID=\"AcadDate\"><Data ss:Type=\"DateTime\">${YYYY}-${MM}-${DD}T00:00:00.000</Data></Cell>"
else
echo "<Cell><Data ss:Type=\"String\">${DATA}</Data></Cell>"
fi
(( c+=1 ))
done
echo "</Row>"
(( r+=1 ))
done
echo "</Table>\n</Worksheet>\n</Workbook>"
rm $IN_FILE > /dev/null
exit 0
Commands inherit their standard input from the process that starts them. In your case, your script provides its standard input for each command that it runs. A simple example script:
#!/bin/bash
cat > foo.txt
Piping data into your shell script causes cat to read that data, since cat inherits its standard input from your script.
$ echo "Hello world" | myscript.sh
$ cat foo.txt
Hello world
The read command is provided by the shell for reading text from standard input into a shell variable if you don't have another command to read or process your script's standard input.
#!/bin/bash
read foo
echo "You entered '$foo'"
$ echo bob | myscript.sh
You entered 'bob'
There is one problem here: if you run the script without first checking whether there is input on stdin, it will hang until something is typed.
To get around this, you can check whether stdin is a pipe first, and fall back to a command-line argument if one is given.
Create a script called "testPipe.sh"
#!/bin/bash
# Check to see if a pipe exists on stdin.
if [ -p /dev/stdin ]; then
echo "Data was piped to this script!"
# If we want to read the input line by line
while IFS= read -r line; do
echo "Line: ${line}"
done
# Or if we want to simply grab all the data, we can simply use cat instead
# cat
else
echo "No input was found on stdin, skipping!"
# Checking to ensure a filename was specified and that it exists
if [ -f "$1" ]; then
echo "Filename specified: ${1}"
echo "Doing things now.."
else
echo "No input given!"
fi
fi
Then to test:
Let's add some stuff to a test.txt file and then pipe the output to our script.
printf "stuff\nmore stuff\n" > test.txt
cat test.txt | ./testPipe.sh
Output:
Data was piped to this script!
Line: stuff
Line: more stuff
Now let's test if not providing any input:
./testPipe.sh
Output:
No input was found on stdin, skipping!
No input given!
Now let's test if providing a valid filename:
./testPipe.sh test.txt
Output:
No input was found on stdin, skipping!
Filename specified: test.txt
Doing things now..
And finally, let's test using an invalid filename:
./testPipe.sh invalidFile.txt
Output:
No input was found on stdin, skipping!
No input given!
Explanation:
Programs like read and cat will consume the script's standard input if data is available; otherwise they will wait for input.
Credit goes to Mike from this page in his answer showing how to check for stdin input: https://unix.stackexchange.com/questions/33049/check-if-pipe-is-empty-and-run-a-command-on-the-data-if-it-isnt?newreg=fb5b291531dd4100837b12bc1836456f
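A related trick if you would rather not block at all: read accepts a timeout in bash and ksh93 (-t is an extension, not POSIX, so treat this as an assumption to verify on your shell):
if IFS= read -r -t 1 first; then
    echo "Got: $first"              # a line arrived on stdin within a second
else
    echo "No input arrived on stdin"
fi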
If the external program (that you are scripting) already takes input from stdin, your script does not need to do anything. For example, awk reads from stdin, so a short script to count words per line:
#!/bin/sh
awk '{print NF}'
Then
./myscript.sh <<END
one
one two
one two three
END
outputs
1
2
3
