Let's say I have the following csv file:
A,1
A,2
B,3
C,4
C,5
For each unique value i in the first column of the file, I want to generate a script that does some processing with that value. I go about it this way:
CSVFILE=path/to/csv
VALUES=$(cut -d, -f1 $CSVFILE | sort | uniq)
for i in $VALUES;
do
cat >> file_${i}.sh <<-!
#!/bin/bash
#
# script that takes value I
#
echo "Processing" $i
!
done
However, this creates empty files for all the values of i it loops over, and prints the intended file content to the console instead.
Is there a way to redirect the output to the files instead?
Simply
#!/bin/bash
FILE=/path/to/file
values=$(awk -F, '{print $1}' "$FILE" | sort -u)
for i in $values; do
    echo "value of i is $i" >> "file_$i.sh"
done
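The same >> redirection works with a here-document too; a minimal sketch along the lines of the question's loop (with an unquoted EOF delimiter so $i expands inside the body):
for i in $values; do
    cat >> "file_$i.sh" <<EOF
#!/bin/bash
echo "Processing $i"
EOF
done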
Try using this:
#!/usr/bin/env bash
csv=/path/to/file
while IFS= read -r i; do
cat >> "file_$i.sh" <<-eof
#!/bin/bash
#
# Script that takes value $i ...
#
eof
done < <(cut -d, -f1 "$csv" | sort -u)
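For the sample csv above this creates file_A.sh, file_B.sh, and file_C.sh. Reading with while IFS= read -r also copes with first-column values that contain spaces, which the word-splitting for i in $VALUES loop would break apart.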
I have a comma-separated txt file:
2012,wp_fronins.pdf
2013,test789.pdf
2014,ok09report.pdf
I'm trying to extract each value from the file and pass it to a curl command, with a condition checked first.
For example:
if $value1 = 2012 do
curl "https://onlinesap.org/reports/$value1/$value2"
Any ideas?
Another way to achieve this is to read the file directly and cut each row to get the individual fields.
while read -r p; do
    value1=$(echo "$p" | cut -d',' -f1)
    value2=$(echo "$p" | cut -d',' -f2)
    if [ "$value1" = "2012" ]; then
        curl "https://onlinesap.org/reports/$value1/$value2"
    fi
    # Add more conditional statements here for other value1 values
done < filename.txt
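A variant of the same idea, sketched under the same filename.txt assumption, lets read split on the comma itself so cut is not spawned twice per line:
#!/bin/bash
# Let read split each line on the comma directly.
while IFS=, read -r value1 value2; do
    if [ "$value1" = "2012" ]; then
        curl "https://onlinesap.org/reports/$value1/$value2"
    fi
done < filename.txt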
Since the name of the pdf file (value2) is unique, you may try something like this:
#!/bin/bash
FILENAME=myFile.txt
awk -F',' '{print $2}' "$FILENAME" | while read -r value2; do
    value1=$(grep -w "$value2" "$FILENAME" | awk -F',' '{print $1}')  # look up value1 for this pdf name
    if [ "$value1" = "2012" ]; then
        curl "https://onlinesap.org/reports/$value1/$value2"
    fi
done
Please note that the whole file is scanned a second time for each line; in other words, the complexity is O(n²).
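A single-pass alternative, sketched with awk under the same myFile.txt assumption, avoids rescanning the file for every line:
# Build the URLs in one pass over the file, then feed them to curl.
awk -F',' '$1 == "2012" { print "https://onlinesap.org/reports/" $1 "/" $2 }' myFile.txt |
while read -r url; do
    curl "$url"
done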
I need to read a JSON file, take values like 99XXXXXXXXXXXX0 and cccs, and write them to a CSV that has the columns BASE_No and Schedule.
Input file: classedFFDCD_5666_4888_45_2018_02112018012106.021.json
"bfgft":"99XXXXXXXXXXXX0","fp":"XXXXXX","cur_gt":225XXXXXXXX0,"cccs"
"bfgft":"21XXXXXXXXXXXX0","fp":"XXXXXX","cur_gt":225XXXXXXXX0,"nncs"
"bfgft":"56XXXXXXXXXXXX0","fp":"XXXXXX","cur_gt":225XXXXXXXX0,"fgbs"
"bfgft":"44XXXXXXXXXXXX0","fp":"XXXXXX","cur_gt":225XXXXXXXX0,"ddss"
"bfgft":"94XXXXXXXXXXXX0","fp":"XXXXXX","cur_gt":225XXXXXXXX0,"jjjs"
Expected output:
BASE_No,Schedule
99XXXXXXXXXXXX0,cccs
21XXXXXXXXXXXX0,nncs
56XXXXXXXXXXXX0,fgbs
44XXXXXXXXXXXX0,ddss
94XXXXXXXXXXXX0,jjjs
I am using the code below to read the file name and date, but I am unable to extract BASE_No and Schedule.
SAVEIFS=$IFS
IFS=$(echo -en "\n\b")
for line in `ls -lrt *.json`; do
date=$(echo $line |awk -F ' ' '{print $6" "$7}');
file=$(echo $line |awk -F ' ' '{print $9}');
echo ''$file','$(date "+%Y/%m/%d %H.%M.%S")'' >> $File_Tracker
done
Assuming the structure of the JSON doesn't change from line to line, the sample code below walks through the file line by line, retrieves the two values, and concatenates them using printf. The output is stored in a new output.txt file.
#!/bin/bash
input="/home/kj4458/winhome/Downloads/sample.json"
printf "BASE_No,Schedule\n" > output.txt
while IFS= read -r var; do
    base=$(echo "$var" | cut -d':' -f2 | cut -d',' -f1)        # 2nd ':'-field, up to the comma
    schedule=$(echo "$var" | cut -d':' -f4 | cut -d',' -f2)    # 4th ':'-field, after the comma
    printf '%s,%s\n' "$base" "$schedule" | sed 's/"//g' >> output.txt
done < "$input"
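For the sample lines above, this produces an output.txt matching the expected BASE_No,Schedule CSV.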
awk -F " \" " ' {print $4","$12 }' file
99XXXXXXXXXXXX0,cccs
21XXXXXXXXXXXX0,nncs
56XXXXXXXXXXXX0,fgbs
44XXXXXXXXXXXX0,ddss
94XXXXXXXXXXXX0,jjjs
I got that result! With " as the field separator, the BASE_No value lands in $4 and the schedule name in $12 on every line.
I'm trying to compare two CSV files by reading the first one line by line and grepping the second file for a match. Using diff is not a viable solution. I seem to be having a problem with the email address stored in a variable when I grep the second file.
#!/bin/bash
LANG=C
head -2 $1 | tail -1 | while read -r line; do
line=$( echo $line | sed 's/\n//g' )
echo $line
cat $2 | cut -d',' -f1 | grep -iF "$line"
done
Variable $line contains an email address that DOES exist in file $2, but I'm not getting any results.
What am I doing wrong?
File1
Email
email#verizon.net
email#gmail.com
email#yahoo.com
File2
email,,,,
email#verizon.net,,,,
email#gmail.com,,,,
email#yahoo.com,,,,
Given:
# csv_0.csv
email
me#me.com
you#me.com
fee#me.com
and
# csv_1.csv
email,foo,bar,baz,bim
bee#me.com,3,2,3,4
me#me.com,4,1,1,32
you#me.com,7,4,6,6
gee#me.com,1,2,2,6
me#me.com,5,7,2,34
you#me.com,22,3,2,33
I ran
$ pattern=$(head -2 csv_0.csv | tail -1 | sed s/,.*//g)
$ grep $pattern csv_1.csv
me#me.com,4,1,1,32
me#me.com,5,7,2,34
To do this for each line in csv_0.csv:
#!/bin/bash
LANG=C
filename="$1"
{
    read -r    # skip the csv header line
    while read -r line; do
        pattern=$(echo "$line" | sed 's/,.*//g')
        grep -F "$pattern" "$2"    # -F: match the address literally, dots and all
    done
} < "$filename"
Then
$ ./csv_read.sh csv_0.csv csv_1.csv
me#me.com,4,1,1,32
me#me.com,5,7,2,34
you#me.com,7,4,6,6
you#me.com,22,3,2,33
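As for the original script's failure: one likely culprit, though it cannot be confirmed from the snippet alone, is Windows line endings in File1. read strips the newline but leaves any trailing carriage return in $line (so the sed 's/\n//g' never has a newline to remove), and that invisible \r makes every grep miss. Trimming it with line=${line%$'\r'} before grepping would rule this out.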
I have written a script that finds the hash value of each word in a dictionary and outputs it in the form "word:md5sum". I then have a file of names, and I would like to pair each name with every hash value, i.e.
tom:word1hash
tom:word2hash
.
.
bob:word1hash
and so on. Everything works fine, but I cannot figure out the substitution. Here is my script.
#!/bin/bash
#/etc/dictionaries-common/words
cat words.txt | while read line; do echo -n "$line:" >> dbHashFile.txt
echo "$line" | md5sum | sed 's/[ ]-//g' >> dbHashFile.txt; done
cat users.txt | while read name
do
cat dbHashFile.txt >> nameHash.txt;
awk '{$1="$name"}' nameHash.txt;
cat nameHash.txt >> dbHash.txt;
done
the line
$awk '{$1="$name"}' nameHash.txt;
is where I attempt to do the substitution.
Thank you for your help.
Try replacing the entire contents of the last loop (both cats and the awk) with:
awk -v name="$name" -F ':' '{ print name ":" $2 }' dbHashFile.txt >>dbHash.txt
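Putting it together, the whole second loop of the question collapses to (a sketch, assuming users.txt and dbHashFile.txt as in the question):
while read -r name; do
    # Print each dbHashFile entry with the word replaced by the current name.
    awk -v name="$name" -F ':' '{ print name ":" $2 }' dbHashFile.txt >> dbHash.txt
done < users.txt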
I'm trying to write a little script which will open a text file and give me an md5 hash for each line of text. For example I have a file with:
123
213
312
I want output to be:
ba1f2511fc30423bdbb183fe33f3dd0f
6f36dfd82a1b64f668d9957ad81199ff
390d29f732f024a4ebd58645781dfa5a
I'm trying to do this part in bash which will read each line:
#!/bin/bash
#read.file.line.by.line.sh
while read line
do
echo $line
done
later on I do:
$ more 123.txt | ./read.line.by.line.sh | md5sum | cut -d ' ' -f 1
but I'm missing something here; it does not work :(
Maybe there is an easier way...
Almost there, try this:
while read -r line; do printf %s "$line" | md5sum | cut -f1 -d' '; done < 123.txt
Unless you also want to hash the newline character at the end of every line, you should use printf or echo -n instead of echo.
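The difference is easy to see on a single value; note that the ba1f25... hash the question expects is in fact the hash of "123" plus a trailing newline:
$ printf %s 123 | md5sum
202cb962ac59075b964b07152d234b70  -
$ echo 123 | md5sum
ba1f2511fc30423bdbb183fe33f3dd0f  -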
In a script:
#! /bin/bash
# Hash every line of every file given as an argument.
cat "$@" | while read -r line; do
    printf %s "$line" | md5sum | cut -f1 -d' '
done
The script can be called with multiple files as parameters.
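For example, with the question's 123.txt (the script name here is just the one used in the question):
$ chmod +x read.file.line.by.line.sh
$ ./read.file.line.by.line.sh 123.txt    # prints one hash per input line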
You can just call md5sum directly in the script:
#!/bin/bash
#read.file.line.by.line.sh
while read -r line
do
    echo "$line" | md5sum | awk '{print $1}'
done
That way the script spits out directly what you want: the md5 hash of each line.
This worked for me (md5 being the BSD/macOS counterpart of md5sum):
while read -r line; do printf %s "$line" | tr -d '\r\n' | md5 >> hashes.csv; done < "$file"
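A minimal Linux equivalent, assuming the same $file variable (md5sum prints "hash  -" for stdin input, so cut keeps just the hash):
while IFS= read -r line; do
    printf %s "$line" | tr -d '\r\n' | md5sum | cut -d' ' -f1
done < "$file" >> hashes.csv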