This gets all the headers in a file
$ head -n 1 basicFile.csv | tr ',' '\n'
header1
header2
header3
header4
header5
header6
header7
header8
header9
header10
What I want is to add the header number to the left, to get something like:
1:header1
...
10:header10
How do I do this?
head -n 1 basicFile.csv | tr ',' '\n' | cat -n
Not exactly the output you specified, but pretty close.
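If you need the exact N:header format, nl can produce it directly, since it lets you pick the number width and the separator (same idea as cat -n; -w1 sets a minimal width, -s':' sets the separator):
head -n 1 basicFile.csv | tr ',' '\n' | nl -w1 -s':'
1:header1
...
10:header10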
There might be a shorter way of doing it with awk, but this works:
oldIFS=$IFS
IFS=','
i=1
for header in $(head -n 1 basicFile.csv); do
echo "${i}:${header}"
((i++))
done
IFS=$oldIFS
You can just use a simple counter and for loop:
COUNTER=1
for h in $(head -n 1 basicFile.csv | tr ',' '\n')
do
printf "%d:%s\n" "$COUNTER" "$h"
(( COUNTER++ ))
done
It depends, just the shell or is awk good enough?
% cat count_headers
cnt=1 ; head -n 1 "$1" | tr ',' '\n' | while read header ; do
printf "%d:%s\n" $cnt "$header"
cnt=$(($cnt+1))
done
% sh count_headers basicFile.csv
1:...
...
% awk -F, 'NR==1 {for(i=1;i<=NF;i++) print i ":" $i}' basicFile.csv
1:...
...
%
Related
I have a text file which contains the following lines:
"user","password_last_changed","expires_in"
"jeffrey","2021-09-21 12:54:26","90 days"
"root","2021-09-21 11:06:57","0 days"
How can I grab the two fields jeffrey and 90 days from between the inverted commas and save them in variables?
If awk is an option, you could read the result into an array and then save the elements as individual variables.
$ IFS="\"" read -ra var <<< $(awk -F, '/jeffrey/{ print $1, $NF }' input_file)
$ var2="${var[3]}"
$ echo "$var2"
90 days
$ var1="${var[1]}"
$ echo "$var1"
jeffrey
while read -r line; do # read in line by line
name=$(echo $line | awk -F, '{ print $1 }' | sed 's/"//g')   # grab first col and strip "
expire=$(echo $line | awk -F, '{ print $3 }' | sed 's/"//g') # grab third col and strip "
echo "$name" "$expire" # do your business
done < yourfile.txt
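Note that this loops over every row, including the header line; if you only want the jeffrey row, you can filter first. A minimal sketch, assuming the file is named yourfile.txt as above:
line=$(grep '"jeffrey"' yourfile.txt)                # keep only the jeffrey row
name=$(echo "$line" | cut -d',' -f1 | tr -d '"')     # first field, quotes stripped
expire=$(echo "$line" | cut -d',' -f3 | tr -d '"')   # third field, quotes stripped
echo "$name" "$expire"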
IFS=","
arr=( $(cat txt | head -2 | tail -1 | cut -d, -f 1,3 | tr -d '"') )
echo "${arr[0]}"
echo "${arr[1]}"
The result is stored in an array; you can access the elements by index.
Maybe the method below, using sed and awk, will help you:
#!/bin/sh
username=$(sed -n '/jeffrey/p' demo.txt | awk -F',' '{print $1}')
echo "$username"
expires_in=$(sed -n '/jeffrey/p' demo.txt | awk -F',' '{print $3}')
echo "$expires_in"
Output :
jeffrey
90 days
Note: the method above will only work if the username is distinct. As far as I know, usernames are not duplicated.
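For what it's worth, the sed step is not strictly necessary: awk can match the line and strip the quotes by itself. A sketch against the same demo.txt:
username=$(awk -F',' '/jeffrey/ { gsub(/"/, ""); print $1 }' demo.txt)
expires_in=$(awk -F',' '/jeffrey/ { gsub(/"/, ""); print $3 }' demo.txt)
echo "$username"    # jeffrey
echo "$expires_in"  # 90 days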
I need to read a JSON file, take values like 99XXXXXXXXXXXX0 and cccs, and write them to a CSV with the columns BASE_No and Schedule.
Input file: classedFFDCD_5666_4888_45_2018_02112018012106.021.json
"bfgft":"99XXXXXXXXXXXX0","fp":"XXXXXX","cur_gt":225XXXXXXXX0,"cccs"
"bfgft":"21XXXXXXXXXXXX0","fp":"XXXXXX","cur_gt":225XXXXXXXX0,"nncs"
"bfgft":"56XXXXXXXXXXXX0","fp":"XXXXXX","cur_gt":225XXXXXXXX0,"fgbs"
"bfgft":"44XXXXXXXXXXXX0","fp":"XXXXXX","cur_gt":225XXXXXXXX0,"ddss"
"bfgft":"94XXXXXXXXXXXX0","fp":"XXXXXX","cur_gt":225XXXXXXXX0,"jjjs"
Expected output:
BASE_No,Schedule
99XXXXXXXXXXXX0,cccs
21XXXXXXXXXXXX0,nncs
56XXXXXXXXXXXX0,fgbs
44XXXXXXXXXXXX0,ddss
94XXXXXXXXXXXX0,jjjs
I am using the code below to read the file name and date, but I am unable to read the file for BASE_No and Schedule.
SAVEIFS=$IFS
IFS=$(echo -en "\n\b")
for line in `ls -lrt *.json`; do
date=$(echo $line |awk -F ' ' '{print $6" "$7}');
file=$(echo $line |awk -F ' ' '{print $9}');
echo ''$file','$(date "+%Y/%m/%d %H.%M.%S")'' >> "$File_Tracker"
done
Assuming the structure of the JSON doesn't change from line to line, the sample code below walks the file line by line, retrieves the two values, and joins them with printf. The output is then stored in a new output.txt file.
#!/bin/bash
input="/home/kj4458/winhome/Downloads/sample.json"
printf "Base,Schedule \n" > output.txt
while IFS= read -r var
do
    base=$(echo "$var" | cut -d':' -f2 | cut -d',' -f1 | sed 's/"//g')      # bfgft value, quotes stripped
    schedule=$(echo "$var" | cut -d':' -f4 | cut -d',' -f2 | sed 's/"//g')  # trailing tag, quotes stripped
    printf "%s,%s\n" "$base" "$schedule" >> output.txt
done < "$input"
awk -F'"' '{ print $4 "," $12 }' file
99XXXXXXXXXXXX0,cccs
21XXXXXXXXXXXX0,nncs
56XXXXXXXXXXXX0,fgbs
44XXXXXXXXXXXX0,ddss
94XXXXXXXXXXXX0,jjjs
I got that result!
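Splitting on the double quote makes every quoted value an even-numbered field, which is why $4 (the bfgft value) and $12 (the trailing tag) are the fields printed. If you also want the BASE_No,Schedule header line from your expected output, a BEGIN block can emit it first (same awk, slightly extended):
awk -F'"' 'BEGIN { print "BASE_No,Schedule" } { print $4 "," $12 }' file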
I am trying to limit the number of lines found during a while read line loop. For example:
File: order.csv
123456,ORDER1,NEW
123456,ORDER-2,NEW
123456,ORDER-3,SHIPPED
I am doing the following.
cat order.csv | while read line;
do
order=$(echo $line | cut -d "," -f 1)
status=$(echo $line | cut -d "," -f 3)
echo "$order:$status"
done
Which outputs:
123456:NEW
123456:NEW
123456:SHIPPED
How can I limit the number of lines read? In this case there are three; how can I limit it to 2, so that only the first two are displayed?
Desired output:
123456:NEW
123456:NEW
There are several ways to meet your requirements:
Method 1
Use head to display first few lines of a file.
head -n 2 order.csv | while read line;
do
order=$(echo $line | cut -d "," -f 1)
status=$(echo $line | cut -d "," -f 3)
echo "$order:$status"
done
Method 2
Use a for loop.
for i in {1..2}
do
read line
order=$(echo $line | cut -d "," -f 1)
status=$(echo $line | cut -d "," -f 3)
echo "$order:$status"
done < order.csv
Method 3
Use awk.
awk -F, 'NR <= 2 { print $1":"$3 }' order.csv
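If order.csv is large, awk can also stop reading as soon as it has printed the second line instead of scanning the rest of the file (same output, just an early exit):
awk -F, '{ print $1 ":" $3 } NR == 2 { exit }' order.csv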
I have this shell script variable, var. It holds 3 entries separated by newlines. From this variable var, I want to extract 2 and 0.078688, just these two numbers.
var="USER_ID=2
# 0.078688
Suhas"
This is the code I tried:
echo "$var" | grep -o -P '(?<=\=).*(?=\n)' # For extracting 2
echo "$var" | awk -v FS="(# |\n)" '{print $2}' # For extracting 0.078688
Neither of the above is working. What is the problem here, and how can I fix it?
Just use tr alone to retain the numerical digits, the dot (.) and the whitespace, and to remove everything else.
tr -cd '0-9. ' <<<"$var"
2 0.078688
From the man page of tr, on the usage of the -c and -d flags:
tr [OPTION]... SET1 [SET2]
-c, -C, --complement
use the complement of SET1
-d, --delete
delete characters in SET1, do not translate
To store it in variables,
IFS=' ' read -r var1 var2 < <(tr -cd '0-9. ' <<<"$var")
printf "%s\n" "$var1"
2
printf "%s\n" "$var2"
0.078688
Or in an array as
IFS=' ' read -ra numArray < <(tr -cd '0-9. ' <<<"$var")
printf "%s\n" "${numArray[@]}"
2
0.078688
Note: the -c and -d flags of tr are POSIX compliant and will work on any system that has tr installed.
echo "$var" |grep -oP 'USER_ID=\K.*'
2
echo "$var" |grep -oP '# \K.*'
0.078688
Your solution is close to perfect; you just need to change \n to $, which represents the end of the line.
echo "$var" |awk -F'# ' '/#/{print $2}'
0.078688
echo "$var" |awk -F'=' '/USER_ID/{print $2}'
2
You can do it with pure bash using a regex:
#!/bin/bash
var="USER_ID=2
# 0.078688
Suhas"
[[ ${var} =~ =([0-9]+).*#[[:space:]]([0-9\.]+) ]] && result1="${BASH_REMATCH[1]}" && result2="${BASH_REMATCH[2]}"
echo "${result1}"
echo "${result2}"
With awk:
First value:
echo "$var" | grep 'USER_ID' | awk -F "=" '{print $2}'
Second value:
echo "$var" | grep '#' | awk '{print $2}'
Assuming the data is in the same format as your sample:
# For extracting 2
echo "$var" | sed -e '/.*=/!d' -e 's///'
echo "$var" | awk -F '=' 'NR==1{ print $2}'
# For extracting 0.078688
echo "$var" | sed -e '/.*#[[:blank:]]*/!d' -e 's///'
echo "$var" | awk -F '#' 'NR==2{ print $2}'
Is there any short and easy way to convert multiple lines of script into a single line to be parsed by an eval command?
i.e.
getent group | cut -f3 -d":" | sort -n | uniq -c |\
while read x ; do
[ -z "${x}" ] && break
set - $x
if [ $1 -gt 1 ]; then
    grps=`getent group | nawk -F: '($3 == n) { print $1 }' n=$2 | xargs`
    echo "Duplicate GID ($2): ${grps}"
fi
done
one_line=`cat your_script_file | sed ":a s/[\]$//; N; s/[\]$//; s/\n/ /; t a ;"`
echo $one_line
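Roughly, that sed expression joins backslash-continued lines in a loop; a commented reading of the same script:
:a          label marking the top of the loop
s/[\]$//    strip a trailing backslash (line continuation)
N           append the next input line to the pattern space
s/[\]$//    strip the continuation backslash again
s/\n/ /     replace the embedded newline with a space
t a         if a substitution was made, branch back to :a
Note that replacing newlines with plain spaces only yields an eval-able one-liner if the multi-line script already carries its own ; separators and keywords, as the example above does.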