Why does the output of a perl eval differ between normal terminal output and redirecting STDOUT + STDERR into a file? - bash

My code is:
perl -e'
use strict; use warnings;
my $a = {};
eval { test(); };
sub test {
    print "11\n";
    if ( $a->{aa} eq "aa" ) {
        print "aa\n";
    }
    else {
        print "bb\n";
    }
}'
Output on Terminal is:
11
Use of uninitialized value in string eq at -e line 9.
bb
If I redirect to a file, the output order differs. Why?
perl -e'
...
' > t.log 2>&1
cat t.log:
Use of uninitialized value in string eq at -e line 9.
11
bb
My Perl version:
This is perl 5, version 18, subversion 4 (v5.18.4) built for x86_64-linux-thread-multi
(with 20 registered patches, see perl -V for more detail)

A simpler demonstration of the problem:
$ perl -e'print("abc\n"); warn("def\n");'
abc
def
$ perl -e'print("abc\n"); warn("def\n");' 2>&1 | cat
def
abc
This is due to differences in how STDOUT and STDERR are buffered.
STDERR isn't buffered.
STDOUT flushes its buffer when a newline is encountered if STDOUT is connected to a terminal.
STDOUT flushes its buffer when it's full otherwise.
$| = 1; turns off buffering for STDOUT.[1]
$ perl -e'$| = 1; print("abc\n"); warn("def\n");' 2>&1 | cat
abc
def
[1] Strictly speaking, it's the currently selected handle, which is the one print writes to if no handle is specified; that's STDOUT by default.

So it's purely an autoflush problem, not an eval question. The solution is:
perl -e'
use strict;
use warnings;
$|++;  # <== turn on autoflush for print output
my $a = {};
test();
sub test {
    print "11\n";
    if ( $a->{aa} eq "aa" ) {
        print "aa\n";
    }
    else {
        print "bb\n";
    }
}' > t.log 2>&1
In some cases the same problem shows up even on the terminal:
perl -e'print("abc"); print(STDERR "def\n"); print("ghi\n");'
The only safe way to get the correct order is to turn on autoflush!
Thanks to dgw and ikegami.

Related

how to execute sed or awk commands in tcl without errors

I want to print the lines between two matches using sed or awk from Tcl.
Since sed is the fastest way to do the job, I am trying to use sed from Tcl. temp is the input file.
temp file:
Hello
1
2
3
work
4 5
6
7
sed -n '/Hello/,/work/p' temp
awk '/Hello/,/work/' temp
This works in the shell; now I want to use it in a Tcl file:
tclsh exec sed -n {/Hello/,/work/p} temp
tclsh exec awk {/Hello/,/work/} temp
but it gives an error:
Missing }.
Expected output:
Hello
1
2
3
work
What am I missing here?
Just do it in pure tcl instead of trying to run an external program:
#!/usr/bin/env tclsh
# Print text in between and including lines matching the regular
# expressions `begin` and `end`
proc print_between {filename begin end} {
    set inf [open $filename]
    set in_block false
    while {[gets $inf line] >= 0} {
        if {[regexp -- $begin $line]} {
            set in_block true
        }
        if {$in_block} {
            puts $line
            if {[regexp -- $end $line]} {
                set in_block false
            }
        }
    }
    close $inf
}
print_between temp Hello work
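For reference, the sed range expression itself can be verified directly in the shell by recreating the sample input file:

```shell
# Recreate the sample "temp" file from the question and run the sed range.
# Note there is no trailing space after "work" in the end pattern, so it
# actually matches the line "work" and the range closes there:
cat > temp <<'EOF'
Hello
1
2
3
work
4 5
6
7
EOF
sed -n '/Hello/,/work/p' temp
```

This prints the lines from `Hello` through `work` inclusive, matching the expected output above.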

Use `sed` to replace text in code block with output of command at the top of the code block

I have a markdown file that has snippets of code resembling the following example:
```
$ cat docs/code_sample.sh
#!/usr/bin/env bash
echo "Hello, world"
```
This means there's a file at the location docs/code_sample.sh, whose contents are:
#!/usr/bin/env bash
echo "Hello, world"
I'd like to parse the markdown file with sed (awk or perl works too) and replace the bottom section of the code snippet with whatever the above bash command evaluates to, for example whatever cat docs/code_sample.sh evaluates to.
Perl to the rescue!
perl -0777 -pe 's/(?<=```\n)^(\$ (.*)\n\n)(?^s:.*?)(?=```)/"$1".qx($2)/meg' < input > output
-0777 slurps the whole file into memory
-p prints the input after processing
s/PATTERN/REPLACEMENT/ works similarly to a substitution in sed
/g replaces globally, i.e. as many times as it can
/m makes ^ match start of each line instead of start of the whole input string
/e evaluates the replacement as code
(?<=```\n) means "preceded by three backquotes and a newline"
(?^s:.*?) changes the behaviour of . to match newlines as well, so it matches (frugally because of the *?) the rest of the preformatted block
(?=```) means "followed by three backquotes"
qx runs the parameter in a shell and returns its output
A sed-only solution is easier if you have the GNU version with an e command.
That said, here's a quick, simplistic, and kinda clumsy version I knocked out that doesn't bother to check the values of previous or following lines - it just assumes your format is good, and bulls through without any looping or anything else. Still, for my example code, it worked.
I started by making an a, a b, and an x that is the markdown file.
$: cat a
#! /bin/bash
echo "Hello, World!"
$: cat b
#! /bin/bash
echo "SCREW YOU!!!!"
$: cat x
```
$ cat a
foo
bar
" b a z ! "
```
```
$ cat b
foo
bar
" b a z ! "
```
Then I wrote s which is the sed script.
$: cat s
#! /bin/env bash
sed -En '
/^```$/,/^```$/ {
    # for the lines starting with the $ prompt
    /^[$] / {
        # save the command to the hold space
        x
        # write the ``` header to the pattern space
        s/.*/```/
        # print the fabricated header
        p
        # swap the command back in
        x
        # the next line should be blank - add it to the current pattern space
        N
        # first print the line of code as-is with the (assumed) following blank line
        p
        # scrub the $ (prompt) off the command
        s/^[$] //
        # execute the command - store the output into the pattern space
        e
        # print the output
        p
        # put the markdown footer back
        s/.*/```/
        # and print that
        p
    }
    # for the (to be discarded) existing lines of "content"
    /^[^`$]/d
}
' $*
It does the job and might get you started.
$: s x
```
$ cat a
#! /bin/bash
echo "Hello, World!"
```
```
$ cat b
#! /bin/bash
echo "SCREW YOU!!!!"
```
Lots of caveats - better to actually check that the $ follows a line of backticks and is followed by a blank line, maybe make sure nothing bogus could be in the file to get executed... but this does what you asked, with (GNU) sed.
Good luck.
A rare case when use of getline would be appropriate:
$ cat tst.awk
state == "importing" {
    print                                    # keep the "$ command" line itself
    while ( (getline line < $NF) > 0 ) {     # read the named file line by line
        print line
    }
    close($NF)
    state = "imported"
}
$0 == "```" { state = (state ? "" : "importing") }
state != "imported" { print }
$ awk -f tst.awk file
See http://awk.freeshell.org/AllAboutGetline for getline uses and caveats.

Splitting very long (4GB) string with new lines

I have a file that is supposed to be JSON objects, one per line. Unfortunately, a miscommunication happened with the creation of the file, and the JSON objects only have a space between them, not a new-line.
I need to fix this by replacing every instance of } { with }\n{.
Should be easy for sed or Perl, right?
sed -e "s/}\s{/}\n{/g" file.in > file.out
perl -pe "s/}\s{/}\n{/g" file.in > file.out
But file.in is actually 4.4 GB which seems to be causing a problem for both of these solutions.
The sed command finishes with a halfway-correct file, but file.out is only 335 MB and contains only about the first 1/10th of the input file, cutting off in the middle of a line. It's almost as if sed just quit in the middle of the stream. Maybe it's trying to load the entire 4.4 GB file into memory, running out of space at around 300 MB, and silently killing itself.
The Perl command errors with the following message:
[1] 2904 segmentation fault perl -pe "s/}\s{/}\n{/g" file.in > file.out
What else should I try?
Unlike the earlier solutions, this one handles {"x":"} {"}.
use strict;
use warnings;
use feature qw( say );
use JSON::XS qw( );

use constant READ_SIZE => 64*1024*1024;

my $j_in  = JSON::XS->new->utf8;
my $j_out = JSON::XS->new;

binmode STDIN;
binmode STDOUT, ':encoding(UTF-8)';

while (1) {
    my $rv = sysread(\*STDIN, my $block, READ_SIZE);
    die($!) if !defined($rv);
    last if !$rv;

    $j_in->incr_parse($block);
    while (my $o = $j_in->incr_parse()) {
        say $j_out->encode($o);
    }
}

die("Bad data") if $j_in->incr_text !~ /^\s*\z/;
The default input record separator in Perl is \n, but you can change it to any character you want. For this problem, you could use { (octal 173).
perl -0173 -pe 's/}\s{/}\n{/g' file.in > file.out
perl -ple 'BEGIN{$/=qq/} {/;$\=qq/}\n{/}undef$\ if eof' <input >output
Assuming your input doesn't contain } { pairs in other contexts that you do not want replaced, all you need is:
awk -v RS='} {' '{ORS=(RT ? "}\n{" : "\n")} 1'
e.g.
$ printf '{foo} {bar}' | awk -v RS='} {' '{ORS=(RT ? "}\n{" : "\n")} 1'
{foo}
{bar}
The above uses GNU awk for multi-char RS and RT and will work on any size input file as it does not read the whole file into memory at one time, just each } {-separated "line" one at a time.
You may read input in blocks/chunks and process them one by one.
use strict;
use warnings;

binmode(STDIN);
binmode(STDOUT);

my $CHUNK  = 0x2000;  # 8 KiB
my $buffer = '';
while ( sysread(STDIN, $buffer, $CHUNK, length($buffer)) ) {
    $buffer =~ s/\}\s\{/}\n{/sg;
    if ( length($buffer) > $CHUNK ) {       # more than one chunk buffered
        syswrite(STDOUT, $buffer, $CHUNK);  # write FIRST of buffered chunks
        substr($buffer, 0, $CHUNK, '');     # remove it from the buffer
    }
}
syswrite(STDOUT, $buffer) if length($buffer);

Pasting text to terminal

I have this small function:
writecmd () {
perl -e 'ioctl STDOUT, 0x5412, $_ for split //, do{ chomp($_ = <>); $_ }' ;
}
It prints the text I give it to STDOUT, and also inserts it into the command-line input buffer.
For example:
[root]$ echo "text" | perl -e 'ioctl STDOUT, 0x5412, $_ for split //, do{ chomp($_ = <>); $_ }' ;
text[root]$ text
How can I make it not output the text to STDOUT but only to the cli buffer?
Or more specifically, I use it to print a variable, and after that I use read to allow users to change that variable while editing it in place instead of writing it all over again.
Thanks.
It seems the output to the terminal is somehow related to whether the prompt has returned or not when ioctl is executed. For example, the following works as expected:
use strict;
use warnings;

my $pid = fork();
if ( $pid == 0 ) {
    sleep 1;
    my $cmd = "ls";
    # 0x5412 = TIOCSTI, see the C include file <asm-generic/ioctls.h>
    ioctl STDOUT, 0x5412, $_ for split //, $cmd;
}
If I remove the sleep 1, it does not work (since there is not enough time for the prompt to return).

Ruby: execute bash command, capture output AND dump to screen at the same time

So my problem is that I need to have the output of running the command dumped to the screen and also capture it in a variable in a ruby script. I know that I can do the second part like this:
some_variable = `./some_kickbutt`
But my problem is that I need it to still print to the console as Hudson captures that output and records it for posterity's sake.
Thanks in advance for any ideas...
Just tee the stdout stream to stderr, like so (the second variant merges the teed copy back into stdout before the pipe, so nl numbers it too):
ruby -e 'var = `ls | tee /dev/stderr`; puts "\nFROM RUBY\n\n"; puts var' | nl
ruby -e 'var = `ls | tee /dev/stderr`; puts "\nFROM RUBY\n\n"; puts var' 2>&1 | nl
