I have this small function:
writecmd () {
perl -e 'ioctl STDOUT, 0x5412, $_ for split //, do{ chomp($_ = <>); $_ }' ;
}
It pushes the text I give it into the command-line buffer, but the text also gets printed to STDOUT.
For example:
[root]$ echo "text" | perl -e 'ioctl STDOUT, 0x5412, $_ for split //, do{ chomp($_ = <>); $_ }' ;
text[root]$ text
How can I make it write the text only to the CLI buffer, not to STDOUT?
More specifically, I use it to print a variable, and then I use read so users can edit that variable in place instead of typing it all over again.
Thanks.
It seems the output to the terminal is related to whether or not the prompt has returned by the time ioctl is executed. For example, the following works as expected:
use strict;
use warnings;
my $pid = fork();
if ( $pid == 0 ) {
    sleep 1;
    my $cmd = "ls";
    # 0x5412 = TIOCSTI, see C include file: <asm-generic/ioctls.h>
    ioctl STDOUT, 0x5412, $_ for split //, $cmd;
}
If I remove the sleep 1, it does not work (since there is not enough time for the prompt to return).
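Based on that, one workaround is to fork: the parent exits immediately so the shell prompt comes back, and the child injects the characters after a short delay. A minimal sketch; the one-second sleep is a crude stand-in for "the prompt has returned", not real synchronization:

use strict;
use warnings;

# 0x5412 = TIOCSTI: fake terminal input, see <asm-generic/ioctls.h>
my $text = shift // die "usage: $0 TEXT\n";

my $pid = fork();
die "fork failed: $!" if !defined $pid;
if ( $pid == 0 ) {
    sleep 1;    # crude: give the shell time to print its prompt
    ioctl STDOUT, 0x5412, $_ for split //, $text;
    exit 0;
}
# the parent exits right away, so the prompt returns before the injection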
I have a file that is supposed to be JSON objects, one per line. Unfortunately, a miscommunication happened with the creation of the file, and the JSON objects only have a space between them, not a new-line.
I need to fix this by replacing every instance of } { with }\n{.
Should be easy for sed or Perl, right?
sed -e "s/}\s{/}\n{/g" file.in > file.out
perl -pe "s/}\s{/}\n{/g" file.in > file.out
But file.in is actually 4.4 GB, which seems to be causing a problem for both of these solutions.
The sed command finishes with a halfway-correct file, but file.out is only 335 MB, roughly the first tenth of the input, and it cuts off in the middle of a line. It's almost as if sed quit partway through the stream; maybe it tries to load the entire 4.4 GB file into memory, runs out of space at around 300 MB, and silently kills itself.
The Perl command errors with the following message:
[1] 2904 segmentation fault perl -pe "s/}\s{/}\n{/g" file.in > file.out
What else should I try?
Unlike the earlier solutions, this one handles {"x":"} {"}.
use strict;
use warnings;
use feature qw( say );
use JSON::XS qw( );
use constant READ_SIZE => 64*1024*1024;
my $j_in = JSON::XS->new->utf8;
my $j_out = JSON::XS->new;
binmode STDIN;
binmode STDOUT, ':encoding(UTF-8)';
while (1) {
    my $rv = sysread(\*STDIN, my $block, READ_SIZE);
    die($!) if !defined($rv);
    last if !$rv;
    $j_in->incr_parse($block);
    while (my $o = $j_in->incr_parse()) {
        say $j_out->encode($o);
    }
}
die("Bad data") if $j_in->incr_text !~ /^\s*\z/;
The default input record separator in Perl is \n, but you can change it to any character you want. For this problem, you could use { (octal 173).
perl -0173 -pe 's/}\s{/}\n{/g' file.in > file.out
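A quick sanity check on a tiny hypothetical sample:

$ printf '{"a":1} {"b":2} {"c":3}' | perl -0173 -pe 's/}\s{/}\n{/g'
{"a":1}
{"b":2}
{"c":3}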
perl -ple 'BEGIN{$/=qq/} {/;$\=qq/}\n{/}undef$\ if eof' <input >output
Assuming your input doesn't contain } { pairs in other contexts that you do not want replaced, all you need is:
awk -v RS='} {' '{ORS=(RT ? "}\n{" : "\n")} 1'
e.g.
$ printf '{foo} {bar}' | awk -v RS='} {' '{ORS=(RT ? "}\n{" : "\n")} 1'
{foo}
{bar}
The above uses GNU awk for multi-char RS and RT and will work on any size input file as it does not read the whole file into memory at one time, just each } {-separated "line" one at a time.
You can read the input in blocks/chunks and process them one by one.
use strict;
use warnings;
binmode(STDIN);
binmode(STDOUT);
my $CHUNK=0x2000; # 8kiB
my $buffer = '';
while( sysread(STDIN, $buffer, $CHUNK, length($buffer))) {
    $buffer =~ s/\}\s\{/}\n{/sg;
    # Keep a couple of bytes buffered so a "} {" that would straddle
    # the write boundary can still be matched on the next pass.
    if( length($buffer) > $CHUNK + 2) {
        syswrite( STDOUT, $buffer, $CHUNK); # write FIRST of buffered chunks
        substr($buffer,0,$CHUNK,'');        # remove FIRST of buffered chunks from buffer
    }
}
syswrite( STDOUT, $buffer) if length($buffer);
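Used as a filter (the script name is hypothetical):

perl fix_by_chunks.pl < file.in > file.out

Memory use stays bounded at a couple of chunks no matter how large the input file is.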
In bash:
#!/bin/bash
var=$(cat ps.txt)
for i in $var ; do
    echo $i
done
and ps.txt is :
356735
535687
547568537
7345673
3653468
2376958764
12345678
12345
Now I want to do that with Perl; that is, I want to know how to save the output of a command in a Perl variable, like var=$(cat ps.txt).
Instead of using cat to get file contents into a Perl variable, you should use open and <> in "slurp mode":
open my $fh, "<", "ps.txt" or die "Failed to open ps.txt: $!";
local $/;
my $file_contents = <$fh>;
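If you want the line-by-line loop from the bash example rather than one big string, a minimal sketch:

use strict;
use warnings;

open my $fh, '<', 'ps.txt' or die "Failed to open ps.txt: $!";
while (my $line = <$fh>) {
    chomp $line;
    print "$line\n";   # do something with each line here
}
close $fh;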
Here are some ways to do it:
#!/usr/bin/perl
$ifile = "ps.txt";
# capture command output
# NOTE: this puts each line in a separate array element -- the newline is _not_
# stripped
@bycat = (`cat $ifile`);
# this strips the newline from all array elements:
chomp(@bycat);
# so would this:
# NOTE: for this type of foreach, if you modify $buf, it also modifies the
# corresponding array element
foreach $buf (@bycat) {
    chomp($buf);
}
# read in all elements line-by-line
open($fin,"<$ifile") or die("unable to open '$ifile' -- $!\n");
while ($buf = <$fin>) {
    chomp($buf);
    push(@byread,$buf);
}
close($fin);
# print the arrays
# NOTE: we are passing the arrays "by-reference"
show("bycat",\#bycat);
show("byread",\#byread);
# show -- dump the array
sub show
# sym -- name of array
# ptr -- reference to array
{
    my($sym,$ptr) = @_;
    my($buf);
    foreach $buf (@$ptr) {
        printf("%s: %s\n",$sym,$buf);
    }
}
I'm not sure what this is trying to achieve, but this is my answer:
my $var = `/bin/cat $0`; # the Perl program itself ;-)
print $var;
If you need the lines, $var can be split on $/.
#! /usr/bin/perl -w
my $var = `/bin/cat $0`;
print $var;
my $n = 1;
for my $line ( split( $/, $var ) ){
    print "$n: $line\n";
    $n++;
}
I want to port my perl application to windows.
Currently it calls out to "grep" to delete text in a text file, like so:
system("grep -v '$mcadd' $ARGV[0] >> $ARGV[0].bak");
system("mv $ARGV[0].bak $ARGV[0]");
This works perfectly well in ubuntu, but I'm not sure (a) how to modify my perl script to achieve the same effect on Windows, and (b) whether there is a way to achieve the effect in a way that will work in both environments.
Is there another way to delete text in Perl?
You can use perl's inplace editing facility.
~/pperl_programs$ cat data.txt
hello world
goodbye mars
goodbye perl6
back to perl5
Run this:
use strict;
use warnings;
use 5.020;
my $fname = 'data.txt';
#Always use three arg form of open().
#Don't use bareword filehandles.
#open my $INFILE, '<', $fname
# or die "Couldn't open $fname: $!";
{
    local $^I = ".bak";     #Turn on inplace editing for this block only
    local @ARGV = ($fname); #Set @ARGV for this block only
    while (my $line = <>) { #The "diamond operator" reads from @ARGV
        if ($line !~ /hello/) {
            print $line;    #This does not go to STDOUT--it goes to a new file that perl creates for you.
        }
    }
} #Return $^I and @ARGV to their previous values
#close $INFILE;
Here is the result:
$ cat data.txt
goodbye mars
goodbye perl6
back to perl5
With inplace editing turned on, perl takes care of creating a new file, sending print() output to that new file, renaming the new file to the original file name when you are done, and saving a copy of the original file with a .bak extension.
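For one-off edits, the same facility is available from the command line; this one-liner is equivalent to the script above:

perl -i.bak -ne 'print unless /hello/' data.txt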
system("perl -n -e 'if(\$_ !~ /$mcadd/) { print \$_; }' \$ARGV[0] >> \$ARGV[0].bak");
system("rename \$ARGV[0].bak \$ARGV[0]");
This should work in windows.
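If you want a single approach that works unchanged on both Ubuntu and Windows, a minimal sketch that avoids the shell entirely (it treats the pattern as a literal string via \Q...\E, like grep -F rather than grep; adjust if $mcadd is a real regex):

use strict;
use warnings;
use File::Copy qw(move);

my ($mcadd, $file) = @ARGV;   # pattern and file name
open my $in,  '<', $file       or die "Can't read $file: $!";
open my $out, '>', "$file.bak" or die "Can't write $file.bak: $!";
while (my $line = <$in>) {
    print {$out} $line unless $line =~ /\Q$mcadd\E/;
}
close $in;
close $out or die "Can't close $file.bak: $!";
move("$file.bak", $file) or die "Can't rename: $!";   # File::Copy is core and portable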
My code is:
perl -e'
use strict; use warnings;
my $a={};
eval{ test(); };
sub test{
    print "11\n";
    if( $a->{aa} eq "aa"){
        print "aa\n";
    }
    else{
        print "bb\n";
    }
}'
Output on Terminal is:
11
Use of uninitialized value in string eq at -e line 9.
bb
If I redirect to a file, the output order differs. Why?
perl -e'
...
' > t.log 2>&1
cat t.log:
Use of uninitialized value in string eq at -e line 9.
11
bb
My Perl version:
This is perl 5, version 18, subversion 4 (v5.18.4) built for x86_64-linux-thread-multi
(with 20 registered patches, see perl -V for more detail)
A simpler demonstration of the problem:
$ perl -e'print("abc\n"); warn("def\n");'
abc
def
$ perl -e'print("abc\n"); warn("def\n");' 2>&1 | cat
def
abc
This is due to differences in how STDOUT and STDERR are buffered.
STDERR isn't buffered.
STDOUT flushes its buffer when a newline is encountered if STDOUT is connected to a terminal.
STDOUT flushes its buffer when it's full otherwise.
$| = 1; turns off buffering for STDOUT[1].
$ perl -e'$| = 1; print("abc\n"); warn("def\n");' 2>&1 | cat
abc
def
[1] Actually, the currently selected handle, which is the one print writes to if no handle is specified; that is STDOUT by default.
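An equivalent, more self-documenting way to enable autoflush uses the core IO::Handle module:

use IO::Handle;
STDOUT->autoflush(1);   # same effect as $| = 1 for STDOUT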
It's just an autoflush problem, not an eval question.
The solution is:
perl -e'
use strict;
use warnings;
$|++; # <== this autoflush print output
my $a={};
test();
sub test{
    print "11\n";
    if( $a->{aa} eq "aa"){
        print "aa\n";
    }
    else{
        print "bb\n";
    }
}' > t.log 2>&1
In some cases the same problem shows up on the terminal:
perl -e'print("abc"); print(STDERR "def\n"); print("ghi\n");'
The only safe way to get the correct order is to turn on autoflush!
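With autoflush on, the partial line is flushed before the write to STDERR, so the order is preserved:

$ perl -e'$| = 1; print("abc"); print(STDERR "def\n"); print("ghi\n");'
abcdef
ghi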
@dgw + ikegami => thanks!
I made the following script:
print "Will accept input until EOF";
while(defined($line = <STDIN>)){
    print "Input was $line \n";
    if(chomp(@line) eq "end"){
        print "aha\n";
        last;
    }
}
I have 2 questions:
Why, when I type end in the console, can't I see the aha and break out of the loop (last is the equivalent of break, right)?
What is the EOF key combination to stop the while loop? I thought it was Ctrl+D on Windows but it does not work.
Your script is missing use strict; use warnings;. Otherwise you would notice that $line is not @line.
Also, chomp does not return the changed string, it changes it in place and returns the number of characters removed.
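For example:

my $line = "end\n";
my $removed = chomp $line;            # $removed is 1, the count of characters removed
print "matched\n" if $line eq 'end';  # compare the chomped variable itself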
On Windows, Ctrl+Z followed by Enter is used as EOF.
Update: Fixed the EOF.
I have modified your code:
use strict;
use warnings;
print "Will accept input until EOF";
while( my $line = <STDIN> ){
    chomp $line;
    print "Input was $line\n";
    if( $line eq 'end'){
        print "aha\n";
        last;
    }
}