Bash script prints different output for the same command when run from the command line vs. read from a CSV file - bash

I have a shell script which reads a command (with its arguments) from a CSV file and executes it.
The command is /path/ABCD_CALL GI 30-JUN-2010 '' 98994-01
where '' is two single quotes with no space between them.
In the CSV file I am using /opt/isis/infosys/src/gq19iobl/IOBL_CALL GI 30-JUN-2010 \'' \'' 98994-01
to escape the single quotes.
Below is the shell script
IFS=","
cat modules.csv | while read line;
do
d="${line}"
eval "$d"
done
This command prints hundreds of records on the console.
The issue I am facing: when I type the same command manually and run it from a terminal, I see all the output records; but when the same command is run from the CSV via the shell script above, I get only one record, which reports an error array.
I enabled debugging with
set -x
trap read debug
There I can see the following output:
+ cat modules.csv
+ read line
' d='/path/ABCD_CALL GI 30-JUN-2010 '\'''\'' '\'''\'' 98994-01
' eval '/path/ABCD_CALL GI 30-JUN-2010 '\'''\'' '\'''\'' 98994-01
++ /path/ABCD_CALL GI 30-JUN-2010 '' '' $'98994-01\r'
------------- ABCD RESULT SUMMARY -------------
ABCD return message : MESSAGE ARRAY must be checked for ERRORS and WARNINGS. and ABCD returned a 1
Total balances : 0.
Total errors : 1.
error_array[0]
and so on with other details of error.
What should I do to see the same output when reading the same data from the CSV?
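The set -x trace already hints at the cause: the last argument appears as $'98994-01\r', so the CSV lines end with Windows-style CRLF. A minimal sketch that strips the carriage return before evaluating each line (assuming the stray CR is the only difference from the hand-typed command):
while IFS= read -r line; do
line=${line%$'\r'}   # drop a trailing carriage return, if present
eval "$line"
done < modules.csv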

Related

Extracting file content using a for loop [duplicate]

I'm working on a long Bash script. I want to read cells from a CSV file into Bash variables. I can parse lines and the first column, but not any other column. Here's my code so far:
cat myfile.csv|while read line
do
read -d, col1 col2 < <(echo $line)
echo "I got:$col1|$col2"
done
It's only printing the first column. As an additional test, I tried the following:
read -d, x y < <(echo a,b,)
And $y is empty. So I tried:
read x y < <(echo a b)
And $y is b. Why?
You need to use IFS instead of -d:
while IFS=, read -r col1 col2
do
echo "I got:$col1|$col2"
done < myfile.csv
To skip a given number of header lines:
skip_headers=3
while IFS=, read -r col1 col2
do
if ((skip_headers))
then
((skip_headers--))
else
echo "I got:$col1|$col2"
fi
done < myfile.csv
Note that for general-purpose CSV parsing you should use a specialized tool which can handle quoted fields with internal commas, among other issues that Bash can't handle by itself. Examples of such tools are csvtool and csvkit.
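For instance, a quick sketch using csvkit's csvcut (assuming csvkit is installed):
csvcut -c 1,2 myfile.csv   # prints the first two columns, handling quoted commas correctly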
How to parse a CSV file in Bash?
Coming late to this question: since this question is about Bash, since newer versions of Bash offer new features, and since none of the already posted answers shows this powerful and standards-compliant way of doing precisely this, here it is.
Parsing CSV files under bash, using a loadable module
Conforming to RFC 4180, a string like this sample CSV row:
12,22.45,"Hello, ""man"".","A, b.",42
should be split as
1 12
2 22.45
3 Hello, "man".
4 A, b.
5 42
Bash loadable C compiled modules.
Under bash, you can create, edit, and use loadable C compiled modules. Once loaded, they work like any other builtin! (You can find more information in the source tree.)
The current source tree (Oct 15 2021, bash V5.1-rc3) contains a bunch of samples:
accept listen for and accept a remote network connection on a given port
asort Sort arrays in-place
basename Return non-directory portion of pathname.
cat cat(1) replacement with no options - the way cat was intended.
csv process one line of csv data and populate an indexed array.
dirname Return directory portion of pathname.
fdflags Change the flag associated with one of bash's open file descriptors.
finfo Print file info.
head Copy first part of files.
hello Obligatory "Hello World" / sample loadable.
...
tee Duplicate standard input.
template Example template for loadable builtin.
truefalse True and false builtins.
tty Return terminal name.
uname Print system information.
unlink Remove a directory entry.
whoami Print out username of current user.
There is a full working CSV parser ready to use in the examples/loadables directory: csv.c!
On Debian GNU/Linux based systems, you may have to install the bash-builtins package:
apt install bash-builtins
Using the loadable bash builtin:
enable -f /usr/lib/bash/csv csv
From there, you could use csv as a bash builtin.
With my sample: 12,22.45,"Hello, ""man"".","A, b.",42
csv -a myArray '12,22.45,"Hello, ""man"".","A, b.",42'
printf "%s\n" "${myArray[#]}" | cat -n
1 12
2 22.45
3 Hello, "man".
4 A, b.
5 42
Then, in a loop, processing a file:
while IFS= read -r line;do
csv -a aVar "$line"
printf "First two columns are: [ '%s' - '%s' ]\n" "${aVar[0]}" "${aVar[1]}"
done <myfile.csv
This approach is clearly quicker and more robust than any combination of Bash builtins, or than forking to any external binary.
Unfortunately, depending on your system's implementation, if your version of bash was compiled without loadable builtin support, this may not work...
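A small guard at the top of a script can make that failure explicit (a sketch; /usr/lib/bash/csv is the Debian path and may differ on your system):
# bail out cleanly if the shell lacks loadable-builtin support
if ! enable -f /usr/lib/bash/csv csv 2>/dev/null; then
echo "csv loadable builtin not available" >&2
exit 1
fi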
Complete sample with multiline CSV fields.
Conforming to RFC 4180, a string like this single CSV row:
12,22.45,"Hello ""man"",
This is a good day, today!","A, b.",42
should be split as
1 12
2 22.45
3 Hello "man",
This is a good day, today!
4 A, b.
5 42
Full sample script for parsing CSV containing multiline fields
Here is a small sample file with one header line, 4 columns and 3 rows. Because two fields contain newlines, the file is 6 lines long.
Id,Name,Desc,Value
1234,Cpt1023,"Energy counter",34213
2343,Sns2123,"Temperatur sensor
to trigg for alarm",48.4
42,Eye1412,"Solar sensor ""Day /
Night""",12199.21
And a small script able to parse this file correctly:
#!/bin/bash
enable -f /usr/lib/bash/csv csv
file="sample.csv"
exec {FD}<"$file"
read -ru $FD line
csv -a headline "$line"
printf -v fieldfmt '%-8s: "%%q"\\n' "${headline[@]}"
numcols=${#headline[@]}
while read -ru $FD line;do
while csv -a row "$line" ; (( ${#row[@]} < numcols )) ;do
read -ru $FD sline || break
line+=$'\n'"$sline"
done
printf "$fieldfmt\\n" "${row[#]}"
done
This may render (I've used printf "%q" to represent non-printable characters like newlines as $'\n'):
Id : "1234"
Name : "Cpt1023"
Desc : "Energy\ counter"
Value : "34213"
Id : "2343"
Name : "Sns2123"
Desc : "$'Temperatur sensor\nto trigg for alarm'"
Value : "48.4"
Id : "42"
Name : "Eye1412"
Desc : "$'Solar sensor "Day /\nNight"'"
Value : "12199.21"
You can find a full working sample here: csvsample.sh.txt or csvsample.sh.
Note:
In this sample, I use the header line to determine the row width (number of columns). If your header line could hold newlines (or if your CSV uses more than one header line), you will have to pass the number of columns (and the number of header lines) as arguments to your script.
Warning:
Of course, parsing CSV this way is not perfect! It works for many simple CSV files, but take care about encoding and security! For example, this module won't be able to handle binary fields!
Read the csv.c source code comments and RFC 4180 carefully!
From the man page:
-d delim
The first character of delim is used to terminate the input line,
rather than newline.
You are using -d, which will terminate the input line on the comma. It will not read the rest of the line. That's why $y is empty.
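A minimal illustration of the difference (-d sets the line terminator, IFS sets the field separator):
read -d, x y <<< 'a,b,'       # read stops at the first ','  -> x='a', y=''
IFS=, read -r x y <<< 'a,b'   # ',' splits the fields        -> x='a', y='b'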
We can parse CSV files with quoted strings, delimited by say |, with the following code:
while read -r line
do
field1=$(echo "$line" | awk -F'|' '{printf "%s", $1}' | tr -d '"')
field2=$(echo "$line" | awk -F'|' '{printf "%s", $2}' | tr -d '"')
echo "$field1 $field2"
done < "$csvFile"
awk parses the string fields into variables and tr removes the quotes.
This is slightly slower, as awk is executed for each field.
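A sketch of the same idea with no per-field forks, letting read split on | and stripping the quotes in Bash itself (assuming | never occurs inside the quoted fields):
while IFS='|' read -r field1 field2 _; do
field1=${field1//\"/}   # remove the double quotes
field2=${field2//\"/}
echo "$field1 $field2"
done < "$csvFile"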
In addition to the answer from @Dennis Williamson, it may be helpful to skip the first line when it contains the header of the CSV:
{
read
while IFS=, read -r col1 col2
do
echo "I got:$col1|$col2"
done
} < myfile.csv
If you want to read a CSV file while skipping a line (say, the header), this is one solution:
i=1
while IFS=, read -ra line
do
test $i -eq 1 && ((i=i+1)) && continue
for col_val in "${line[@]}"
do
echo -n "$col_val|"
done
echo
done < "$csvFile"

Inspect null character from Bash's read command

I am on a system that does not have hexdump. I know there's a null character on STDIN, but I want to show/prove it. I've got Ruby on the system. I've found that I can directly print it like this:
$ printf 'before\000after' | (ruby -e "stdin_contents = STDIN.read_nonblock(10000) rescue nil; puts 'stdin contents: ' + stdin_contents.inspect")
stdin contents: "before\x00after"
However, I need to run this inside of a bash script i.e. STDIN is not being directly piped to my script. I have to get it via running read in bash.
When I try to use read to get the stdin characters, it seems to be truncating them and it doesn't work:
$ printf 'before\000after' | (read -r -t 1 -n 1000000; printf "$REPLY" | ruby -e "stdin_contents = STDIN.read_nonblock(10000) rescue nil; puts 'stdin contents: ' + stdin_contents.inspect")
stdin contents: "before"
My question is this: how can I get the full/raw output, including the null character, from read?
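Note that Bash variables cannot hold NUL bytes at all, so no form of read will store one. One workaround is read -d '': each read then stops at a NUL, and the number of chunks proves the NULs are there. A sketch:
# each read consumes input up to the next NUL; read returns non-zero
# at EOF, leaving the trailing data in $chunk
parts=()
while IFS= read -r -d '' chunk; do
parts+=("$chunk")
done
parts+=("$chunk")
printf 'found %d NUL(s)\n' $(( ${#parts[@]} - 1 ))
printf 'chunk: %q\n' "${parts[@]}"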

Shell Script not appending, instead it's erasing contents

My goal is to curl my newly created API with a list of usernames from a .txt file, receive the API response, save it to a .json, then create a .csv at the end (to make it easier to read).
This is my script:
echo "$result" | jq 'del(.id, .Time, .username)' | jq '{andres:[.[]]}' > newresult
Input: sudo bash script.sh usernames.txt
Usernames.txt:
test1
test2
test3
test4
Result:
"id","username"
4,"test4"
Desired Result:
"id","username"
1,"test1"
2,"test2"
3,"test3"
4,"test4"
It creates the files as required, and even saves the result. However, it only outputs one result. I can open the CSV/JSON while it's running and see it querying the different usernames, but when it starts another query, rather than appending everything to the same file, it deletes newresult, result.json and results.csv and creates new ones. In the end I only get the result for one username, rather than a list of, say, five. Can someone tell me what I'm doing wrong?
Thanks!
Use >> to append to the file. Try:
: >results.csv
for ligne in $(seq 1 "$nbrlignes");
do
...
jq -r '
["id", "username"] as $headers
| $headers, (.andres[][] | [.[$headers[]]]) | @csv
' < result.json >> results.csv
done
By using > you overwrite the file each time the loop runs.
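A two-line demonstration (the : >file above truncates once, before the loop):
for i in 1 2 3; do echo "$i" >  last.txt; done   # last.txt contains only: 3
for i in 1 2 3; do echo "$i" >> all.txt;  done   # all.txt contains: 1 2 3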
Also, your script looks like it should be largely rewritten and simplified.

Obtain output from a bash command in Ruby

I'm trying to obtain the output of a bash command. More precisely, I need to store the number of lines that contain a string in a file:
variable_name = "AAAAAAA"
PATH_TO_SEARCH = "."
COMMAND = "grep -R #{variable_name} #{PATH_TO_SEARCH} | wc -l"
To execute the command I tried both methods:
num_lines = %x[ #{COMMAND} ]
num_lines = `#{COMMAND}`
but the problem is that num_lines contains 1) the number of lines that contain the string (OK!) and 2) errors from grep like "grep: /home/file_example.txt: No such file or directory" (NO!).
I would like to store just the first output.
Looks like you may just need to suppress the error messages.
"You can use the -s or --no-messages flag to suppress errors." found from How can I have grep not print out 'No such file or directory' errors?

How do I append to an existing string variable inside a loop in bash? [duplicate]

This question already has answers here:
Are shell scripts sensitive to encoding and line endings?
(14 answers)
Closed 9 months ago.
I have a simple bash script that downloads stock prices and appends them to a variable, which is then outputted:
#!/bin/bash
output=" "
for stock in GOOG AAPL
do
price=$(curl -s "http://download.finance.yahoo.com/d/quotes.csv?s=$stock&f=l1")
output+="$stock: $price "
done
echo "$output"
This script only displays AAPL: 524.896, the last piece of data fetched. According to whatswrongwithmyscript, there isn't anything wrong with the script, and I thought I was following this answer properly. This answer discussed a similar problem (appending to a string variable inside a loop) and suggested a different method which I used like this:
#!/bin/bash
output=" "
for stock in GOOG AAPL
do
price=$(curl -s "http://download.finance.yahoo.com/d/quotes.csv?s=$stock&f=l1")
output="$output$stock: $price "
done
echo "$output"
The output is still the same. I'm using bash 4.2.45 on debian jessie x64.
More info
I echoed the result in a loop to debug, and from the first script, this is what I get:
GOOG: 1030.42
AAPL: 524.896
AAPL: 524.896
And the second script gives the same thing:
GOOG: 1030.42
AAPL: 524.896
AAPL: 524.896
When I run your script and pipe the output to od -c, the result is illuminating:
0000000 G O O G : 1 0 3 0 . 4 2 \r
0000020 A A P L : 5 2 4 . 8 9 6 \r \n
0000040
So you can see that it IS in fact getting all the entries and concatenating them, but it's ALSO getting CR characters (the \r in the od output), which cause them to print over the top of each other when you print the string.
You can pipe the output of curl to tr -d '\r' to strip off the problematic CRs:
price=$(curl -s "...." | tr -d '\r')
I'm pretty sure that the problem is that curl is returning a carriage return and this is messing with the printing of both values. If you redirect the output of the curl command to a file and view it in vi, you'll see it's created a DOS file.
This seems to work:
#!/bin/bash
output=""
for stock in GOOG AAPL
do
price=$(curl -s "http://download.finance.yahoo.com/d/quotes.csv?s=$stock&f=l1" | tr -d '\r')
output+="$stock $price\n"
done
echo -e "$output"
