Redirect / pipe into read command - bash

This is a follow-up to my previous question on SO. I am still trying to run a script, deepScript, from within another script, shallowScript, and to process its output before it is displayed on the terminal. Here is a code sample:
deepScript.sh
#!/bin/zsh
print "Hello - this is deepScript"
read "ans?Reading : "
print $ans
shallowScript.sh
#!/bin/zsh
function __process {
    while read input; do
        echo $input | sed "s/e/E/g"
    done
}
print "Hello - this is shallowScript"
. ./deepScript.sh |& __process
(edited: the outcome of this syntax and of two alternatives is pasted below)
[UPDATE]
I have tried alternative syntaxes for the last redirection, . ./deepScript.sh |& __process, and each syntax has a different outcome, but of course none of them is the one I want. I'll paste each syntax and the resulting output of ./shallowScript.sh (where I typed "input" when read was waiting for input), together with my findings so far.
Option 1 : . ./deepScript.sh |& __process
From this link, it seems that . ./deepScript.sh is run from a subshell, but not __process. Output:
zsh : ./shallowScript.sh
Hello - this is shallowScript
HEllo - this is dEEpScript
input
REading : input
Basically, the first two lines are printed as expected; then, instead of printing the prompt REading :, the script waits directly for input on stdin, and only then prints the prompt and executes print $ans.
Option 2: __process < <(. ./deepScript.sh)
Zsh's manpage indicates that (. ./deepScript.sh) will run as a subprocess. To me, that looks similar to Option 1. Output:
Hello - this is shallowScript
Reading : HEllo - this is dEEpScript
input
input
So, within . ./deepScript.sh, it prints read's prompt (script line 3) before the print (script line 2). Strange.
Option 3: __process < =(. ./deepScript.sh)
According to the same manpage, (. ./deepScript.sh) here sends its output to a temporary file, which is then injected into __process (I don't know whether there is a subprocess or not). Output:
Hello - this is shallowScript
Reading : input
HEllo - this is dEEpScript
input
Again, deepScript's line 3 prints to the terminal before line 2, but now it waits for the read to be complete.
Two questions:
Should this be expected?
Is there a fix or a workaround?

The observed delay stems from two factors:
deepScript.sh and __process run asynchronously
read reads a complete line before returning
deepScript.sh writes the prompt to standard error, but without a newline. It then waits for your input, while __process keeps waiting for a full line to arrive so that its own call to read can finish.
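One workaround (a minimal sketch; it assumes you can accept the prompt itself not being transformed by __process) is to pipe only standard output, so that the newline-less prompt written to standard error reaches the terminal immediately instead of sitting in __process's line buffer:
. ./deepScript.sh | __process
In zsh, |& is shorthand for 2>&1 |, so the original version sends the prompt into the pipe as well, which is what holds it back until a full line is available.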

Related

How to see syntax errors reported with actual line numbers in the parent script when Perl is embedded within shell script?

For no justifiable reason at all, I have a pretty substantial Perl script embedded within a Bash function that is being invoked within an autoenv .env file.
It looks something like this:
perl='
$inverse = "\e[7m";
$invoff = "\e[27m";
$bold = "\e[1m";
⋮
'
perl -e "$perl" "$inputfile"
I understand that standalone Perl scripts and the PATH variable are a thing, and I understand that Term::ANSIColor is a thing. This is not about that.
My question is, if there's a syntax error in the embedded Perl code, how can I get Perl to report the actual line number within the parent shell script?
For example, say the perl= assignment occurs on line 120 within that file, but there's a syntax error on the 65th line of actual Perl code. I get this:
syntax error at -e line 65, near "s/(#.*)$/$comment\1$endcomment/"
Execution of -e aborted due to compilation errors.
…but I want to see this (the actual line number in the parent script) instead:
syntax error at -e line 185, near "s/(#.*)$/$comment\1$endcomment/"
Things I've tried (that didn't work):
assigning to __LINE__
don't even know why I thought that would work; it's not a variable, it's a constant, and you get an error stating the same
assigning to $. ($INPUT_LINE_NUMBER with use English)
I was pretty sure this wasn't going to work anyway, because this is like NR in Awk, and this clearly isn't what this is for
As described in perlsyn, you can use the following directive to set the line number and (optionally) the file name of the subsequent line:
#line 42 "file.pl"
This means that you could use
#!/bin/sh

perl="#line 4 \"$0\""'
warn("test");
'
perl -e "$perl"
Output:
$ ./a.sh
test at ./a.sh line 4.
There's no clean way to avoid hardcoding the line number when using sh, but it is possible.
#!/bin/sh
script_start=$( perl -ne'if (/^perl=/) { print $.+1; last }' -- "$0" )
perl="#line $script_start \"$0\""'
warn("test");
'
perl -e "$perl"
On the other hand, bash provides the current line number.
#!/bin/bash
script_start=$(( LINENO + 2 ))
perl="#line $script_start \"$0\""'
warn("test");
'
perl -e "$perl"
There is this useful tidbit in the perlrun man page, under the section for -x, which "tells Perl that the program is embedded in a larger chunk of unrelated text, such as in a mail message."
All references to line numbers by the program (warnings, errors, ...) will treat the #! line as the first line. Thus a warning on the 2nd line of the program, which is on the 100th line in the file will be reported as line 2, not as line 100. This can be overridden by using the #line directive. (See Plain Old Comments (Not!) in perlsyn)
Based on that last statement, adding #line NNN (where NNN is the actual line number of the parent script where that directive appears) achieves the desired effect:
perl='#line 120
$inverse = "\e[7m";
$invoff = "\e[27m";
$bold = "\e[1m";
⋮
'
⋮
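Since the function here lives in bash, the hardcoded 120 can also be computed, along the lines of the LINENO-based answer above; a minimal sketch ($inputfile is the variable from the question, and ${BASH_SOURCE[0]} replaces the "$0" of the earlier examples on the assumption that an autoenv .env file is sourced rather than executed):
perl="#line $(( LINENO + 1 )) \"${BASH_SOURCE[0]}\""'
$inverse = "\e[7m";
$invoff = "\e[27m";
$bold = "\e[1m";
# ... rest of the embedded Perl code ...
'
perl -e "$perl" "$inputfile"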

Why does bash dispatch the DEBUG signal after the script exits?

I am learning about the DEBUG signal of bash.
The following is test code to reproduce the phenomenon in my question, so it does not have much meaning in itself; please don't worry about the details.
It prepares two traps: one is called on the EXIT signal (originally to clean up a temporary script, but here it is a dummy function), and the second one is called on the DEBUG signal to calculate the line the debugger is scanning.
My question is that the DEBUG signal may be dispatched for clean_up_debugger_func with LINENO = 0. Why is LINENO 0 at that time? I add details of my question after the following output of bash -x. Please tell me why this happens.
Thank you very much.
#!/bin/bash
# file name is debug.working
source "bash_debugger_functions.sh"
trap clean_up_debugger_func EXIT
no_of_line_until_here=${LINENO} # *** no_of_line_until_here is 12 ***
trap "show_line_scanned \$(( \${LINENO} - ${no_of_line_until_here} - 1 ))" DEBUG
#!/bin/bash
echo "echo_sring = $1"
The following is a library file
#!/bin/bash
# file name is 'bash_debugger_functions.sh'
clean_up_debugger_func() {
echo "dummy"
}
show_line_scanned() {
echo "At line $1"
}
The following is part of the output of bash -x debug.working:
+ (debug.working:17): echo 'echo_sring = test_message'
++ (debug.working:1): show_line_scanned -12
Just after "echo 'echo_sring = test_message'" is called, show_line_scanned is called with negative value, -12. no_of_line_until_here is +12. So it seems LINENO is 0 at that time. I don't know why the show_line_scanned is called here because I supposed that DEBUG signal is dispatched at each line but there is no new line after "echo "echo_sring = $1"". And I would like to know why LINENO is 0 here.
Please teach me the mechanism here.
I supposed that DEBUG signal is dispatched at each line but there is no new line after "echo "echo_sring = $1"".
There is still a line executed after the last line of your debug.working script, because you set up trap clean_up_debugger_func EXIT, and it is for that clean_up_debugger_func command that you get the DEBUG dispatch which puzzles you.
And I would like to know why LINENO is 0 here.
The execution of the script has just ended at this time, and man bash states about LINENO:
When not in a script or function, the value substituted is not guaranteed to be meaningful.
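If that extra dispatch is unwanted, one option is to guard the trap body so that it only reports lines past the setup block; a minimal sketch reusing the question's variable names (the assumption is that you simply want to ignore dispatches whose LINENO falls outside the body of the script):
trap "if (( \${LINENO} > ${no_of_line_until_here} )); then show_line_scanned \$(( \${LINENO} - ${no_of_line_until_here} - 1 )); fi" DEBUG
With this in place of the original trap line, the dispatch that fires for the EXIT trap's command is silently skipped, since whatever small value LINENO holds at that point fails the comparison.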

Why the loop of "While ... do ... done" can't read a text file? [closed]

The job I want to do is to read a line from a text file (currently this file only contains one line; the number of lines will be increased later) with a "while ... do ... done" loop. The weird thing is that it can only read some text files. My code is:
...(previous commands to create "myfile.txt")...
while read -r line
do
echo "flag"
done < "myfile.txt"
I have tried a few cases. If I replace "myfile.txt" with another file, "test.txt", created by hand in the current directory (this "test.txt" also contains one line), my script can print "flag".
Similarly, if, after "myfile.txt" has been created, I modify and save it in the current directory and then run my script, it also prints "flag" normally.
In all other cases, my script can't print "flag".
I also tried to chmod and touch the text file in my script, as follows, but that doesn't work either.
Obviously, I want my script to read the line(s) of a text file. Can anybody please tell me the reason and give a solution?
BTW, this file can be read by the cat command.
...(previous commands to create "myfile.txt")...
chmod 777 "myfile.txt"
touch "myfile.txt"
cat "myfile.txt" #(I can see the results of this line)
while read -r line
do
echo "flag"
done < "myfile.txt"
Thanks !
The whole program that creates the text file is around 800 lines, but here are the lines which create it:
for (i = 1; i <= 6; ++i) {
...
ofstream myfile("myfile.txt", std::ios_base::app);
...
if(myfile.is_open()){
myfile << "rms_" << std::setprecision(3) << RMS_values ;
myfile.close();
}
}
**************** Beginning of my solution ****************************************
Thanks for above replies.
I have solved it myself, with the help of this link: https://unix.stackexchange.com/questions/31807/what-does-the-noeol-indicator-at-the-bottom-of-a-vim-edit-session-mean
The reason is that in my program that produces the text file, there is no "\n" at the end. So the text file shows a "[noeol]" indicator after the filename when opened in vi.
According to the link above, when the final newline is missing (which is what "[noeol]" indicates), tools like read won't process that last line.
The solution is rather simple (in hindsight): just add << "\n" at the end of the output statement. The line becomes:
myfile << "rms_" << std::setprecision(3) << RMS_values << "\n";
**************** End of my solution ****************************************
$ cat test.sh
#!/bin/bash
echo "content" > "myfile.txt"
cat "myfile.txt" #(I can see the results of this line)
while read -r line
do
echo "flag"
done < "myfile.txt"
$ bash test.sh
content
flag
$
It works; there is no problem with it. The script is an exact copy of what you posted, except that the touch is replaced with writing some content, because the while loop prints one message per line in the file; if there are no lines (and touch won't add any), it will obviously print nothing.
I'm taking a guess here:
In Unix, two assumptions are made about text files:
All lines end in a <LF> character. If you edit your file on an old, old Mac, which used <CR>, Unix won't see the line endings. If you edit a file in Windows programs like Notepad.exe, your lines will end in <CR><LF> and Unix will assume the <CR> is part of the line.
All lines must end in a <LF>, including the last line. If you write the file from a C program, the last line may not end in a <LF> unless you specifically write one out.
Unix utilities like awk, grep, and shells live and breathe on these assumptions. When someone tells me something doesn't quite work when reading a file from a shell script, I tell them to edit that file in VIM and then save it (thus forcing an ending <LF> character). In VIM, you need to :set ff=unix and then save. That usually takes care of the issue.
My guess is that the file you're reading doesn't have the correct line endings, and/or that the last line doesn't have that <LF> character at the end.
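If regenerating the file with a final newline is not an option, a common shell-side workaround (a minimal sketch, keeping the loop body from the question) is to also process a last line that lacks the trailing <LF>:
while read -r line || [ -n "$line" ]; do
echo "flag"
done < "myfile.txt"
read returns a non-zero status at end-of-file, but it still stores the partial last line in line, which the [ -n "$line" ] test then picks up.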
I don't really understand your question - can you show us more code/how you create the file?
Here is a working example:
$ cat readfile.sh
#!/bin/bash
{
cat <<EOT
this
is
a
test
file
EOT
} > ./test.txt
while read -r line; do
echo "line = [${line}]"
done <./test.txt
$ ./readfile.sh
line = [this]
line = [is]
line = [a]
line = [test]
line = [file]

Evolution e-mail client, pipe to program, code always returns 0

I am using "pipe to program" option in Evolution email client, that runs following ruby script
#!/usr/bin/ruby
# example code below
junk_mail = 2
junk_mail
Now this program always returns 0, irrespective of the value of the junk_mail variable.
I guess it has something to do with Evolution forking a child process to run this code, so that 0 (clean exit) is always received back?
Help needed.
EDIT
I figured out that the actual problem is with the data being read from the pipe. The following code works fine when tested on the command line, but it is unable to read the piped data when called from the Evolution client:
#!/usr/bin/ruby
email_txt = ARGF.read
File.open("~/debug.txt", 'a') { |file| file.write(email_txt + "\n") }
$ cat email.txt | ./myprog.rb
This produces debug.txt as expected, but when called from Evolution's pipe-to-program, it gets empty data.
Am I reading the piped stream data the correct way for a script called from an external program? (I am on Fedora 20.)
Use exit:
#!/usr/bin/ruby
junk_mail = 2
exit junk_mail
You can test this by running it from the command line in Linux, then echoing the exit value via
echo $?
EDIT
To read STDIN into a single string:
email_txt = STDIN.readlines.join
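To check the exit-status behaviour outside Evolution, the script can be run from a shell and its status inspected as suggested above (a hypothetical transcript; junkcheck.rb is just a stand-in name for the exit-based version of the script):
$ ./junkcheck.rb
$ echo $?
2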

Bash: "command not found" on simple variable assignment

Here's a simple version of my script which displays the failure:
#!/bin/bash
${something:="false"}
${something_else:="blahblah"}
${name:="file.ext"}
echo ${something}
echo ${something_else}
echo ${name}
When I echo the variables, I get the values I put in, but it also emits an error. What am I doing wrong?
Output:
./test.sh: line 3: blahblah: command not found
./test.sh: line 4: file.ext: command not found
false
blahblah
file.ext
The first two lines are being emitted to stderr, while the next three are being output to stdout.
My platform is Fedora 15, bash version 4.2.10.
You can add a colon:
: ${something:="false"}
: ${something_else:="blahblah"}
: ${name:="file.ext"}
The trick with a ":" (no-operation command) is that, nothing gets executated, but parameters gets expanded. Personally I don't like this syntax, because for people not knowing this trick the code is difficult to understand.
You can use this as an alternative:
something=${something:-"default value"}
or longer, more portable (but IMHO more readable):
[ "$something" ] || something="default value"
Putting a parameter expansion on a line by itself makes the shell execute whatever the expansion produces as a command. That an assignment is being performed at the same time is incidental.
In short, don't do that.
echo ${something:="false"}
echo ${something_else:="blahblah"}
echo ${name:="file.ext"}
It's simply
variable_name=value
If you use ${variable_name:=value}, bash substitutes the value of variable_name if it is set (and non-empty); otherwise it assigns the default you specified and uses that.
