Why can't I terminate this while loop? - windows

I made the following script:
print "Will accept input until EOF";
while(defined($line = <STDIN>)){
print "Input was $line \n";
if(chomp(#line) eq "end"){
print "aha\n";
last;
}
}
I have 2 questions:
Why, when I type end in the console, don't I see the aha and break out of the loop (last is the equivalent of break, right)?
What is the EOF key combination to stop the while loop? I thought it was Ctrl+D on Windows, but it does not work.

Your script is missing use strict; use warnings;. With them, you would notice that $line is not @line.
Also, chomp does not return the changed string; it changes the string in place and returns the number of characters removed.
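For example, a minimal sketch of what chomp actually gives back:
my $input = "end\n";
my $removed = chomp $input;   # $input is now "end"
print "chomp removed $removed character(s), input is '$input'\n";   # prints: chomp removed 1 character(s), input is 'end'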
On MS Windows, Ctrl+Z followed by Enter is used as EOF.
Update: Fixed the EOF.

I have modified your code:
use strict;
use warnings;
print "Will accept input until EOF";
while( my $line = <STDIN> ){
    chomp $line;
    print "Input was $line\n";
    if( $line eq 'end'){
        print "aha\n";
        last;
    }
}

Related

Pasting text to terminal

I have this small function:
writecmd () {
perl -e 'ioctl STDOUT, 0x5412, $_ for split //, do{ chomp($_ = <>); $_ }' ;
}
It prints the text I give it to STDOUT, and also pushes it into the command-line buffer.
For example:
[root]$ echo "text" | perl -e 'ioctl STDOUT, 0x5412, $_ for split //, do{ chomp($_ = <>); $_ }' ;
text[root]$ text
How can I make it not output the text to STDOUT, but only to the CLI buffer?
More specifically, I use it to print a variable, and after that I use read to let users change that variable by editing it in place instead of typing it all over again.
Thanks.
It seems the output to the terminal is related to whether or not the prompt has returned by the time ioctl is executed. For example, the following works as expected:
use strict;
use warnings;
my $pid = fork();
if ( $pid == 0 ) {
    sleep 1;
    my $cmd = "ls";
    # 0x5412 = TIOCSTI, see c include file: <asm-generic/ioctls.h>
    ioctl STDOUT, 0x5412, $_ for split //, $cmd;
}
If I remove the sleep 1, it does not work, since then there is not enough time for the prompt to return.
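Building on that, one way to restructure the original writecmd so the text ends up only in the terminal input buffer is to fork inside the one-liner and let the parent return immediately. This is just a sketch under the same assumption as above (a Linux terminal, where 0x5412 is TIOCSTI):
writecmd () {
    perl -e '
        chomp( my $text = <STDIN> );
        exit if fork;                # parent exits, so the shell prompt returns
        sleep 1;                     # give the prompt time to come back
        # 0x5412 = TIOCSTI: push each character into the terminal input queue
        ioctl STDOUT, 0x5412, $_ for split //, $text;
    ';
}
Used as echo "text" | writecmd, the word should then appear after the prompt as editable input instead of also being printed before it.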

change bash script code to perl

In bash:
#!/bin/bash
var=$(cat ps.txt)
for i in $var ; do
    echo $i
done
and ps.txt is:
356735
535687
547568537
7345673
3653468
2376958764
12345678
12345
Now I want to do that with Perl, or at least I want to know how to save the output of a command in a variable in Perl, like var=$(cat ps.txt) does.
Instead of using cat to get file contents into a Perl variable, you should use open and <> in "slurp mode":
open my $fh, "<", "ps.txt" or die "Failed to open ps.txt: $!";
local $/;
my $file_contents = <$fh>;
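If what you actually want is the line-by-line behaviour of the bash loop (echo each number), a minimal sketch that reads ps.txt directly, without slurping, would be:
open my $fh, "<", "ps.txt" or die "Failed to open ps.txt: $!";
while ( my $line = <$fh> ) {
    chomp $line;
    print "$line\n";    # the equivalent of: for i in $var ; do echo $i ; done
}
close $fh;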
Here are some ways to do it:
#!/usr/bin/perl

$ifile = "ps.txt";

# capture command output
# NOTE: this puts each line in a separate array element -- the newline is _not_
# stripped
@bycat = (`cat $ifile`);

# this strips the newline from all array elements:
chomp(@bycat);

# so would this:
# NOTE: for this type of foreach, if you modify $buf, it also modifies the
# corresponding array element
foreach $buf (@bycat) {
    chomp($buf);
}

# read in all elements line-by-line
open($fin,"<$ifile") or die("unable to open '$ifile' -- $!\n");
while ($buf = <$fin>) {
    chomp($buf);
    push(@byread,$buf);
}
close($fin);

# print the arrays
# NOTE: we are passing the arrays "by-reference"
show("bycat",\@bycat);
show("byread",\@byread);

# show -- dump the array
sub show
# sym -- name of array
# ptr -- reference to array
{
    my($sym,$ptr) = @_;
    my($buf);

    foreach $buf (@$ptr) {
        printf("%s: %s\n",$sym,$buf);
    }
}
I'm not sure what this is trying to achieve, but this is my answer:
my $var = `/bin/cat $0`; # the Perl program itself ;-)
print $var;
If you need the lines, $var can be split on $/.
#! /usr/bin/perl -w
my $var = `/bin/cat $0`;
print $var;
my $n = 1;
for my $line ( split( $/, $var ) ){
    print "$n: $line\n";
    $n++;
}

Extracting the first two characters from a file in perl into another file

I'm having a little bit of trouble with my code below -- I'm trying to figure out how to open up all these text files (.csv files that end in DIS that all have one line in them) and get the first two characters (these are all numbers) from them and print them into another file of the same name, with a ".number" suffix. Some of these .DIS files don't have anything in them, in which case I want to print "0".
Lastly, I would like to go through each original .DIS file and delete the first 3 characters -- I did this through bash.
my @DIS = <*.DIS>;

foreach my $file (@DIS){
    my $name = $file;
    my $output = "$name.number";
    open(INHANDLE, "< $file") || die("Could not open file");
    while(<INHANDLE>){
        open(OUT_FILE,">$output") || die;
        my $line = $_;
        chomp ($line);
        my $string = $line;
        if ($string eq ""){
            print "0";
        } else {
            print substr($string,0,2);
        }
    }
    system("sed -i 's/\(.\{3\}\)//' $file");
}
When I run this code, I get a list of numbers concatenated together and empty .DIS.number files. I'm rather new to Perl, so any help would be appreciated!
When I run this code, I get a list of numbers concatenated together and empty .DIS.number files.
This is because of this line.
print substr($string,0,2);
print defaults to printing to STDOUT (ie. the screen). You need to give it the filehandle to print to.
print OUT_FILE substr($string,0,2);
They're being concatenated because print just prints what you tell it to; it won't put newlines in for you (there are some global variables which can change this, but don't mess with them). You have to add the newline yourself.
print OUT_FILE substr($string,0,2), "\n";
As a final note, when working with files in Perl I would suggest using lexical filehandles, Path::Tiny, and autodie. They will avoid a great number of classic problems working with files in Perl.
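A rough sketch of that style, assuming the same *.DIS layout as in the question (lexical filehandles, with autodie supplying the die checks; Path::Tiny offers a similar convenience):
use strict;
use warnings;
use autodie;

for my $file ( glob '*.DIS' ) {
    open my $in,  '<', $file;
    open my $out, '>', "$file.number";

    my $line = <$in>;
    chomp $line if defined $line;

    # write the first two characters, or 0 for an empty file
    if ( defined $line && length $line >= 2 ) {
        print $out substr( $line, 0, 2 ), "\n";
    }
    else {
        print $out "0\n";
    }
}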
I suggest you do it like this
Each *.dis file is opened and the contents read into $text. Then a regex substitution is used to remove the first three characters from the string and capture the first two in $1
If the substitution succeeded then the contents of $1 are written to the number file, otherwise the original file is empty (or shorter than two characters) and a zero is written instead. The remaining contents of $text are then written back to the *.dis file
use strict;
use warnings;
use v5.10.1;
use autodie;
for my $dis_file ( glob '*.DIS' ) {
    my $text = do {
        open my $fh, '<', $dis_file;
        <$fh>;
    };

    my $num_file = "$dis_file.number";

    open my $dis_fh, '>', $dis_file;
    open my $num_fh, '>', $num_file;

    if ( defined $text and $text =~ s/^(..).?// ) {
        print $num_fh "$1\n";
        print $dis_fh $text;
    }
    else {
        print $num_fh "0\n";
        print $dis_fh "-\n";
    }
}
This awk script extracts the first two chars of each file into its own file. Empty files are expected to have one empty line, based on the spec.
awk 'FNR==1{pre=substr($0,1,2);pre=length(pre)==2?pre:0; print pre > FILENAME".number"}' *.DIS
This will remove the first 3 chars
cut -c 4-
A bash for loop will be better to do both, for which we'll need to modify the awk script a little bit:
for f in *.DIS; do
    awk 'NR==1{pre=substr($0,1,2);$0=length(pre)==2?pre:0; print}' $f > $f.number;
    cut -c 4- $f > $f.cut;
done
Explanation: loop through all files in *.DIS; for the first line of each file, try to get the first two chars (1,2) of the line ($0) and assign them to pre. If the length of pre is not two (either the line is empty or has only 1 char), set the line to 0; otherwise use pre. Then print the line; the output file name is the input file name with a .number suffix appended. The $0 assignment is a trick to save a couple of keystrokes, since print without arguments prints $0; otherwise you can provide the argument.
Ideally you should quote "$f", since the file name may contain spaces...

How to edit previous line from current in text file?

So, here is what I need exactly.
I have a file that I'm looping through line by line, and when I find the word "search" I need to go back to the previous line and change the word "false" to "true" in that line, but only in that line, not in the whole file. I'm a newbie in bash, and this is all I have:
file="/u01/MyFile.txt"
count=0
while read line
do
    ((count++))
    if [[ $line == *"[search]"* ]]
    then
        ?????????????
    fi
done < $file
You could do the whole thing in pure bash like this:
# Declare a function process_file doing the stuff
process_file() {
    # Always have the previous line ready, hold off printing
    # until we know if it needs to be changed.
    read prev
    while read line; do
        if [[ $line == *"[search]"* ]]; then
            # substitute false with true in $prev. Use ${prev//false/true} if
            # several occurrences may need to be replaced.
            echo "${prev/false/true}"
        else
            echo "$prev"
        fi
        # remember current line as previous for next turn
        prev="$line"
    done
    # in the end, print the last line (it was saved as $prev) in the last
    # loop iteration.
    echo "$prev"
}
# call function, feed file to it.
process_file < file
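For instance, with a made-up three-line input file containing
foo=false
[search]
bar=false
the function prints
foo=true
[search]
bar=false
i.e. only the line directly above [search] is changed.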
However, there are tools that are better suited to this sort of file processing than pure bash and that are commonly used in shell scripts: awk and sed. These tools process a file by reading line after line1 from it and running a piece of code for each line individually, preserving some state between lines (not unlike the code above) and come with more powerful text processing facilities.
For this, I'd use awk:
awk 'index($0, "[search]") { sub(/false/, "true", prev) } NR != 1 { print prev } { prev = $0 } END { print prev }' filename
That is:
index($0, "[search]") { # if the currently processed line contains
sub(/false/, "true", prev) # "[search]", replace false with true in the
# saved previous line. (use gsub if more than
# one occurrence may have to be replaced)
}
NR != 1 { # then, unless we're processing the first line
# and don't have a previous line,
print prev # print the previous line
}
{ # then, for all lines:
prev = $0 # remember it as previous line for the next turn
}
END { # and after the last line was processed,
print prev # print the last line (that we just saved
# as prev)
}
You could also use sed:
sed '/\[search\]/ { x; s/false/true/; x; }; x; ${ p; x; }; 1d' filename
...but as you can see, sed is somewhat more cryptic. It has its strengths, but this problem doesn't play to them.
Addendum, as requested: The main thing to know is that sed reads line into something called the pattern space (on which most commands operate) and has a hold buffer on the side where you can save things between lines. We'll use the hold buffer to hold the current previous line. The code works as follows:
/\[search\]/ { # if the currently processed line contains [search]
x # eXchange pattern space (PS) and hold buffer (HB)
s/false/true/ # replace false with true in the pattern space
x # swap back. This changed false to true in the PS.
# Use s/false/true/g for multiple occurrences.
}
x # swap pattern space, hold buffer (the previous line
# is now in the PS, the current in the HB)
${ # if we're processing the last line,
p # print the PS
x # swap again (current line is now in PS)
}
1d # If we're processing the first line, the PS now holds
# the empty line that was originally in the HB. Don't
# print that.
# We're dropping off the end here, and since we didn't
# disable auto-print, the PS will be printed now.
# That is the previous line except if we're processing
# the last line (then it's the last line)
Well, I did warn you that sed is somewhat more cryptic than awk. A caveat of this code is that it expects the input file to have more than one line.
1 In awk's case, it's records that don't have to be lines but are lines by default.
A very simple approach would be to read 2 lines at a time and then check for the condition in the second line and replace the previous line.
while read prev_line            # reads every 1st line
do
    read curr_line              # reads every 2nd line
    if [[ $curr_line == *"[search]"* ]]; then
        echo "${prev_line/false/true}"
        echo "$curr_line"
    else
        echo "$prev_line"
        echo "$curr_line"
    fi
done < "file.txt"
The correct version of your way of doing this would be:
file="/u01/MyFile.txt"
count=0
while read line
do
    ((count++))
    if [[ $line == *"[search]"* ]]
    then
        sed -i.bak "$((count-1))s/false/true/" $file
    fi
done < $file

Parsing csv file and skip the first 3000 lines

I wrote this function to modify my csv file:
sub convert
{
    # open the output/input file
    my $file = $firstname."_lastname_".$age.".csv";
    $file =~ /(.+\/)(.+\.csv)/;
    my $file_simple = $2;
    open my $in, '<', $file or die "can not read the file: $file $!";
    open my $out, '>', $outPut."_lastname.csv" or die "can not open the o file: $!";
    $_ = <$in>;
    # first line
    print $out "X,Y,Z,W\n";
    while( <$in> )
    {
        if(/(-?\d+),(-?\d+),(-?\d+),(-?\d+),(-?\d+)/)
        {
            my $tmp = ($4.$5);
            print $out $2.$sep.$3.$sep.$4.$sep.($5/10)."\n";
        }
        else
        {print $out "Error: ".$_;}
    }
    close $out;
}
I would like to skip the first 3000 lines and I have no idea how to do it; it's my first time using Perl.
Thank you.
Since you wish to skip the first 3000 lines, just use next if in tandem with the current line number variable $.:
use strict; use warnings;

my $skip_lines = 3001;

open(my $fh, '<', 'data.dat') or die $!;
while (<$fh>) {
    next if $. < $skip_lines;
    # process the file
}
close($fh);
Since $. holds the current line number, this program simply tells Perl to start processing at the 3001st line, effectively skipping 3000 lines, as desired.
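Dropped into the question's convert routine, the check sits at the top of the existing while loop. A self-contained sketch, with hypothetical input.csv/output.csv names, assuming $sep is a comma and that the 3000 skipped lines are counted from the top of the file:
use strict;
use warnings;

open my $in,  '<', 'input.csv'  or die "can not read the file: $!";
open my $out, '>', 'output.csv' or die "can not open the output file: $!";

my $sep = ',';
print $out "X,Y,Z,W\n";                  # header line of the output
while ( <$in> ) {
    next if $. <= 3000;                  # skip the first 3000 input lines
    if ( /(-?\d+),(-?\d+),(-?\d+),(-?\d+),(-?\d+)/ ) {
        print $out $2.$sep.$3.$sep.$4.$sep.($5/10)."\n";
    }
    else {
        print $out "Error: ".$_;
    }
}
close $in;
close $out;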
$. Current line number for the last filehandle accessed. Each filehandle in Perl counts the number of lines that have been read from it. (Depending on the value of $/, Perl's idea of what constitutes a line may not match yours.) When a line is read from a filehandle (via readline() or <>), or when tell() or seek() is called on it, $. becomes an alias to the line counter for that filehandle. You can adjust the counter by assigning to $., but this will not actually move the seek pointer. Localizing $. will not localize the filehandle's line count. Instead, it will localize perl's notion of which filehandle $. is currently aliased to. $. is reset when the filehandle is closed, but not when an open filehandle is reopened without an intervening close(). For more details, see I/O Operators in perlop. Because <> never does an explicit close, line numbers increase across ARGV files (but see examples in eof). You can also use HANDLE->input_line_number(EXPR) to access the line counter for a given filehandle without having to worry about which handle you last accessed. Mnemonic: many programs use "." to mean the current line number.
REFERENCE:
http://perldoc.perl.org/perlvar.html
