Iteratively storing items to corresponding files by a given instruction - bash

I want to sort items into corresponding files according to a given instruction.
The instruction is in instruction.txt:
item_1 file_5
item_3 file_2
item_6 file_7
item_22 file_2
...
item_m file_n
Items are stored in a contents.txt:
>item_1
blablas
bla
>item_2
blas
...
>item_m
bla
bla
bla
I want the procedure to read the instructions for each item, go to the contents file, extract an item with its contents (including >item_*, excluding the next >) and append it to the corresponding file_** and save it as file_**_upd.
Would be grateful for the assistance!
P.S. Some items belong to the same files!

Perl to the rescue!
perl -we '
open my $instruction, "<", "instruction.txt" or die $!;
my %where = map split, <$instruction>;
open my $contents, "<", "contents.txt" or die $!;
my $out;
while (<$contents>) {
open $out, ">", $where{$1} if /^>(.*)/;
print {$out} $_;
}'
open opens a file; "<" means open for reading, while ">" means open for writing.
The diamond operator <> reads from the file handle; see readline.
The pairs from the instructions are saved to an associative table %where (see also map and split).
contents.txt is read line by line; if a line starts with >, a new output file is created. The file handle of the output is declared outside the loop, so it survives its iterations, and all lines after a > are printed to the same file.
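For illustration, here is roughly what %where ends up holding for the sample instruction.txt lines from the question (a tiny standalone sketch, not part of the one-liner):
my %where = map split, ("item_1 file_5\n", "item_3 file_2\n", "item_22 file_2\n");
# %where is now: item_1 => "file_5", item_3 => "file_2", item_22 => "file_2"
print $where{"item_3"}, "\n";   # prints "file_2"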
Update: To handle multiple items per file, and also to output items with no file assigned, you need to do a bit more work:
perl -we '
open my $instruction, "<", "instruction.txt" or die $!;
my %where = map split, <$instruction>;
open my $contents, "<", "contents.txt" or die $!;
my $out;
my $unknown = "file_unknown";
my %created;
while (<$contents>) {
open $out, $created{ $where{$1} // $unknown }++ ? ">>" : ">",
$where{$1} // $unknown if /^>(.*)/;
print {$out} $_;
}'
The hash %created keeps track of already created files, so they are appended to rather than overwritten the next time. The defined-or operator // is used to output items with no file assigned to file_unknown.
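For what it's worth, the defined-or fallback behaves like this (a tiny standalone illustration, separate from the one-liner):
my %where   = (item_1 => "file_5");
my $unknown = "file_unknown";
my $known_target   = $where{"item_1"}  // $unknown;  # "file_5"
my $missing_target = $where{"item_99"} // $unknown;  # "file_unknown", since item_99 has no entry
print "$known_target $missing_target\n";             # prints "file_5 file_unknown"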

Related

Remove multiple lines where string occurs and concatenate

I'm new to Bash/Perl and trying to remove multiple lines in a text file where a string occurs. To remove a single line so far I have:
perl -ne '/somestring/ or print' /usr/file.txt > /usr/file1.tmp
To remove lines with a second string I use:
perl -ne '/anotherstring/ or print' /usr/file.txt > /usr/file2.tmp
How can I concatenate file and file2.tmp?
Or how can I modify the command to remove multiple lines where somestring and anotherstring occur?
How can I concatenate file and file2.tmp?
That could be done with
cat file file2.tmp >> file3.tmp
Or if by file you mean file1.tmp,
cat file1.tmp file2.tmp >> file3.tmp
However, that is different from what you're describing in the rest of your question (i.e. removing any line where either of two patterns appears). That could be done by chaining your commands:
perl -ne '/somestring/ or print' /usr/file.txt > /usr/file1.tmp
perl -ne '/anotherstring/ or print' /usr/file1.tmp > /usr/file2.tmp
You can use a pipe to get rid of the intermediate file file1.tmp:
perl -ne '/somestring/ or print' /usr/file.txt | perl -ne '/anotherstring/ or print' > /usr/file2.tmp
This can also be done by using grep (assuming your strings don't make use of any Perl-specific regex features):
grep -v somestring /usr/file.txt | grep -v anotherstring > /usr/file2.tmp
Finally, you can combine the filtering into one command/regex:
perl -ne '/somestring|anotherstring/ or print' /usr/file.txt > /usr/file2.tmp
Or using grep:
grep -v 'somestring\|anotherstring' /usr/file.txt > /usr/file2.tmp
I had some fun with your program, and wrote a highly dynamic Perl program that prints the matches or non-matches for words in each line of any user-defined file, and then writes the requested lines which match or do not match both to the screen and to a new user-defined outfile.
We will be parsing this file: iris_dataset.csv:
"Sepal.Length","Sepal.Width","Petal.Length","Petal.Width","Species"
5.1,3.5,1.4,0.2,"setosa"
4.9,3,1.4,0.2,"setosa"
4.8,3,1.4,0.3,"setosa"
5.1,3.8,1.6,0.2,"setosa"
4.6,3.2,1.4,0.2,"setosa"
7,3.2,4.7,1.4,"versicolor"
6.4,3.2,4.5,1.5,"versicolor"
6.9,3.1,4.9,1.5,"versicolor"
6.6,3,4.4,1.4,"versicolor"
5.5,2.4,3.7,1,"versicolor"
6.3,3.3,6,2.5,"virginica"
5.8,2.7,5.1,1.9,"virginica"
7.1,3,5.9,2.1,"virginica"
6.3,2.9,5.6,1.8,"virginica"
5.9,3,5.1,1.8,"virginica"
It's a comma-separated-value file with columns separated by commas.
You could see each column of items more clearly if you opened this file in a spreadsheet. What we will be looking for is the Species column of the file, so the possible items to match are "setosa", "versicolor", and "virginica".
My program first asks for the file that you want to read from.
In this case it's iris_dataset.csv, though it could be any file. Then you give the name of a file that you want to write to. I call it new_iris.csv, but you can call it anything.
Then we tell the program how many items we are looking for, so if there are 3 items I can type setosa, versicolor, virginica in any order. If there are two I can type only two items, and if there is one, then I can type only setosa or versicolor or virginica in this example file.
Then we are asked if we want to KEEP the lines which match our items,
or if we want to REMOVE the lines of the file which match our items. If we keep the matches, we get the lines which match those items printed to the screen and to our outfile. If we select remove, we get the lines which do not match those items printed to the screen and to our file. If we select neither KEEP nor REMOVE, then we get an error message and our new, empty outfile is deleted since it contains nothing.
#!/usr/bin/env perl
# Program: perl_matching.pl
use strict;   # Means that we have to explicitly declare our variables with "my", "our" or "local" so their scope is defined.
use warnings; # We want to know if and where errors are showing up in our program.
use feature 'say';    # Like print, but with an automatic trailing newline.
use feature 'switch'; # Perl given/when switch statement.
no warnings 'experimental'; # Perl has something against switch.

########### This block of code right here is basically equivalent to a unix ls command ##############
opendir(DIR, ".");        # Opens the current working directory.
my @files = readdir(DIR); # Reads all files in the current working directory into the array @files.
closedir(DIR);            # Now that we have the array of files, we can close our current working directory.
say "Here are the list of files in your current working directory";
foreach (@files) { print "$_\t"; } # $_ is the default variable for each item in an array.
########### It is not critical to run the program ####################

say "\nGive me your filename to read from, extensions and all ..."; # It would be a good idea to have your file in your working directory.
chomp(my $file_read = <STDIN>);  # This makes the filename dynamic from user input; chomp removes the trailing newline.
say "Give me your filename to write to, extensions and all ...";
chomp(my $file_write = <STDIN>); # Results will be printed to this file and to standard output.

# '<' to read from, and '>' to write to ...
# Opening your file to read from:
open(my $filehandle_read, '<', $file_read) or die "Problem reading file $file_read because $!";
# Open your file to write to.
open(my $filehandle_write, '>', $file_write) or die "Problem writing file $file_write because $!";

say "How many matches are you going to give me?";
my $match_num = <STDIN>;
say "Okay give me the matches now, pressing Enter key between each match.";
my $i = 1;      # This is our incrementer between matches.
my $matches;    # This is each match presented line by line.
my @match_list; # This is our array (list) of $matches.
while ($i <= $match_num) {
    $matches = <STDIN>;         # One match at a time from standard input.
    push @match_list, $matches; # Pushes each individual $matches onto the list @match_list.
    $i = $i + 1;                # Increase the incrementer by one so this loop doesn't last forever.
}
chomp(@match_list);
undef($matches); # Clear each match, so that the variable can be redefined.
$matches = join('|', @match_list); # "|" means "or" in a regular expression, so this scalar matches any of the items.
say "This is what your redefined matches variable looks like: $matches";

say "Now you get a choice for your matches";
say "KEEP or REMOVE?"; # If you type Keep (case-insensitive) only the matching lines are printed to the new file. If you type Remove (case-insensitive) only the lines which do not contain the matches are printed to the new file.
chomp(my $choice = <STDIN>);

my @lines_all = <$filehandle_read>; # The filehandle contains everything in the file, so we can pull all of its lines into an array, where each item in the array is one line of the file opened for reading.
close $filehandle_read; # We can now close the filehandle for reading since we just pulled all the information from it.

# We grep for the matching " =~ " lines of our file to read.
my @lines_matching = grep { $_ =~ m/$matches/ } @lines_all;
# We grep for the non-matching " !~ " lines of our file to read.
# Note: $_ is the default variable for each item in the array.
my @lines_not_matching = grep { $_ !~ m/$matches/ } @lines_all;

# This is a Perl-style switch statement.
# Note: a given/when/when/default switch statement
# is basically equivalent to a
# while/if/elsif/else statement.
# In this switch statement only one choice is performed;
# which one depends on whether you said "Keep" or "Remove" for your choice.
given ($choice) {
    when ($choice =~ m/Keep/i) { # "i" is for case-insensitive, so Keep, KEEP, kEeP, etc. are valid.
        say @lines_matching;                     # Print the matching lines to the screen.
        print $filehandle_write @lines_matching; # Print the matching lines to the file.
        close $filehandle_write;                 # Close the file now that we are done with it.
    }
    when ($choice =~ m/Remove/i) {
        say @lines_not_matching;                     # Print the lines that do not match to the screen.
        print $filehandle_write @lines_not_matching; # Print the lines that do not match to the file.
        close $filehandle_write;                     # Close the file now that we are done with it.
    }
    default {
        say "You must have selected a choice other than Keep or Remove. Don't do that!";
        close $filehandle_write; # Close the file now that we are done with it.
        unlink($file_write) or warn "Could not unlink file $file_write"; # If you selected neither Keep nor Remove, we delete the new output file as it contains nothing.
    }
}
Here is the script in action:
I ask it to Remove the lines which contain versicolor and setosa, so only the lines which contain virginica will be printed to the screen and to my outfile, which I called new_iris.csv. Again, I asked for 2 items. Note: as in my program, you can type the words Keep or Remove in any mix of upper and lower case.
>perl perl_matching.pl
Here are the list of files in your current working directory
. .. iris_dataset.csv perl_matching.pl
Give me your filename to read from, extensions and all ...
iris_dataset.csv
Give me your filename to write to, extensions and all ...
new_iris.csv
How many matches are you going to give me?
2
Okay give me the matches now, pressing Enter key between each match.
setosa
versicolor
This is what your redefined matches variable looks like: setosa|versicolor
Now you get a choice for your matches
KEEP or REMOVE?
Remove
"Sepal.Length","Sepal.Width","Petal.Length","Petal.Width","Species"
6.3,3.3,6,2.5,"virginica"
5.8,2.7,5.1,1.9,"virginica"
7.1,3,5.9,2.1,"virginica"
6.3,2.9,5.6,1.8,"virginica"
5.9,3,5.1,1.8,"virginica"
So only those lines which contain neither setosa nor versicolor are printed to our file new_iris.csv:
"Sepal.Length","Sepal.Width","Petal.Length","Petal.Width","Species"
6.3,3.3,6,2.5,"virginica"
5.8,2.7,5.1,1.9,"virginica"
7.1,3,5.9,2.1,"virginica"
6.3,2.9,5.6,1.8,"virginica"
5.9,3,5.1,1.8,"virginica"
I completely enjoy playing with standard input in Perl.
You can use my script to only print the lines of the file which contain
setosa. (You only ask for 1 match.)

Extracting the first two characters from a file in perl into another file

I'm having a little bit of trouble with my code below -- I'm trying to figure out how to open up all these text files (.csv files that end in DIS that all have one line in them) and get the first two characters (these are all numbers) from them and print them into another file of the same name, with a ".number" suffix. Some of these .DIS files don't have anything in them, in which case I want to print "0".
Lastly, I would like to go through each original .DIS file and delete the first 3 characters -- I did this through bash.
my @DIS = <*.DIS>;
foreach my $file (@DIS){
    my $name = $file;
    my $output = "$name.number";
    open(INHANDLE, "< $file") || die("Could not open file");
    while(<INHANDLE>){
        open(OUT_FILE,">$output") || die;
        my $line = $_;
        chomp ($line);
        my $string = $line;
        if ($string eq ""){
            print "0";
        } else {
            print substr($string,0,2);
        }
    }
    system("sed -i 's/\(.\{3\}\)//' $file");
}
When I run this code, I get a list of numbers that are concatenated together and empty .DIS.number files. I'm rather new to Perl, so any help would be appreciated!
When I run this code, I get a list of numbers that are concatenated together and empty .DIS.number files.
This is because of this line.
print substr($string,0,2);
print defaults to printing to STDOUT (ie. the screen). You need to give it the filehandle to print to.
print OUT_FILE substr($string,0,2);
They're being concatenated because print just prints what you tell it to; it won't put newlines in for you (there are some global variables which can change this, but don't mess with them). You have to add the newline yourself.
print OUT_FILE substr($string,0,2), "\n";
As a final note, when working with files in Perl I would suggest using lexical filehandles, Path::Tiny, and autodie. They will avoid a great number of classic problems working with files in Perl.
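For instance, the whole task could be sketched roughly like this with those tools (a sketch only, assuming Path::Tiny is installed from CPAN; it writes the .number files and leaves trimming the original .DIS files aside):
#!/usr/bin/env perl
use strict;
use warnings;
use autodie;     # open, close, etc. now die on failure automatically
use Path::Tiny;

for my $file ( glob '*.DIS' ) {
    # Take the first (and only) line, or an empty string for an empty file.
    my $line = ( path($file)->lines )[0] // '';
    chomp $line;
    my $prefix = length $line ? substr( $line, 0, 2 ) : '0';
    path("$file.number")->spew("$prefix\n");   # write the two digits (or 0)
}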
I suggest you do it like this
Each *.DIS file is opened and the contents read into $text. Then a regex substitution is used to remove the first three characters from the string and capture the first two in $1
If the substitution succeeded then the contents of $1 are written to the number file, otherwise the original file is empty (or shorter than two characters) and a zero is written instead. The remaining contents of $text are then written back to the *.dis file
use strict;
use warnings;
use v5.10.1;
use autodie;
for my $dis_file ( glob '*.DIS' ) {
my $text = do {
open my $fh, '<', $dis_file;
<$fh>;
};
my $num_file = "$dis_file.number";
open my $dis_fh, '>', $dis_file;
open my $num_fh, '>', $num_file;
if ( defined $text and $text =~ s/^(..).?// ) {
print $num_fh "$1\n";
print $dis_fh $text;
}
else {
print $num_fh "0\n";
print $dis_fh "-\n";
}
}
This awk script extracts the first two chars of each file into its own file. Empty files are expected to have one empty line, based on the spec.
awk 'FNR==1{pre=substr($0,1,2);pre=length(pre)==2?pre:0; print pre > FILENAME".number"}' *.DIS
This will remove the first 3 chars
cut -c 4-
A bash for loop will be better for doing both, though we'll need to modify the awk script a little bit:
for f in *.DIS;
do awk 'NR==1{pre=substr($0,1,2);$0=length(pre)==2?pre:0; print}' $f > $f.number;
cut -c 4- $f > $f.cut;
done
Explanation: loop through all files in *.DIS; for the first line of each file, try to get the first two chars (1,2) of the line ($0) and assign them to pre. If the length of pre is not two (either the line is empty or it has only 1 char) set the line to 0, otherwise use pre; then print the line. The output file name will be the input file name with a .number suffix appended. The $0 assignment is a trick to save a couple of keystrokes, since print without arguments prints $0; otherwise you could provide the argument.
Ideally you should quote "$f", since the file name may contain spaces...

How to read excel file in shell script

I have an Excel file which contains the following values. I want to read those values from the Excel file and pass those values to execute my test.
users1=2
loop1=1
users2=1
loop2=1
Could anyone please help with how I can achieve that?
Using Linux you have several choices, but none without using a scripting language and most likely installing an extra module.
Using Perl you could read Excel files i.e. with this module:
https://metacpan.org/pod/Spreadsheet::Read
Using Python you might want to use:
https://pypi.python.org/pypi/xlrd
And using Ruby you could go for:
https://github.com/zdavatz/spreadsheet/blob/master/GUIDE.md
So whatever you prefer, there are tools to help you.
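For instance, with Spreadsheet::Read a first attempt might look like this (a rough sketch; the file name test.xlsx and the assumption that your values sit in column A of the first sheet are mine, and Spreadsheet::Read also needs a backend parser module for your spreadsheet format installed):
#!/usr/bin/perl
use strict;
use warnings;
use Spreadsheet::Read;

my $book  = ReadData('test.xlsx');   # read the whole workbook
my $sheet = $book->[1];              # [0] is metadata, [1] is the first sheet

# Walk down column A and print whatever is there, e.g. "users1=2".
for my $row ( 1 .. $sheet->{maxrow} ) {
    my $cell = $sheet->{"A$row"};
    print "$cell\n" if defined $cell and length $cell;
}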
CSV Format
If you can get your data as CSV (comma separated values) file, then it is even easier, because no extra modules are needed.
For example in Perl, you could use the split function. Now that I roughly know the format of your CSV file, let me give you a simple sample:
#!/usr/bin/perl
use strict;
use warnings;

# put full path to your csv file here
my $file = "/Users/daniel/dev/perl/test.csv";

# open file and read data
open(my $data, '<', $file) or die "Could not read '$file' $!\n";

# loop through all lines of data
while (my $line = <$data>) {
    # one line
    chomp $line;
    # split fields from line by comma
    my @fields = split ",", $line;
    # get size of split array
    my $size = $#fields + 1;
    # loop through all fields in array
    for (my $i = 0; $i < $size; $i++) {
        # first element should be user
        my $user = $fields[$i];
        print "User is $user";
        # now check if there is another field following
        if (++$i < $size) {
            # second field should be loop
            my $loop = $fields[$i];
            print ", Loop is $loop";
            # now here you can call your command
            # I used "echo" as a test, replace it with whatever
            system("echo", $user, $loop);
        } else {
            # got only user but no loop
            print "NO LOOP FOR USER?";
        }
        print "\n";
    }
}
So this goes through all lines of your CSV file, looks for User,Loop pairs and passes them to a system command. For this sample I used echo, but you should replace this with your command.
Looks like I did your homework :D

Parsing csv file and skip the first 3000 lines

I wrote this function to modify my csv file:
sub convert
{
    # open the output/input file
    my $file = $firstname."_lastname_".$age.".csv";
    $file =~ /(.+\/)(.+\.csv)/;
    my $file_simple = $2;
    open my $in, '<', $file or die "can not read the file: $file $!";
    open my $out, '>', $outPut."_lastname.csv" or die "can not open the o file: $!";
    $_ = <$in>;
    # first line
    print $out "X,Y,Z,W\n";
    while( <$in> )
    {
        if(/(-?\d+),(-?\d+),(-?\d+),(-?\d+),(-?\d+)/)
        {
            my $tmp = ($4.$5);
            print $out $2.$sep.$3.$sep.$4.$sep.($5/10)."\n";
        }
        else
        {
            print $out "Error: ".$_;
        }
    }
    close $out;
}
I would like to skip the first 3000 lines and I have no idea how to do it; it's my first time using Perl.
Thank you.
Since you wish to to skip the first 3000 lines, just use next if in tandem with the current line number variable $.:
use strict; use warnings;
my $skip_lines = 3001;
open(my $fh, '<', 'data.dat') or die $!;
while (<$fh>) {
    next if $. < $skip_lines;
    # process the file here
}
close($fh);
Since $. holds the current line number, this program simply tells Perl to start processing at the 3001st line, effectively skipping 3000 lines, as desired.
$. Current line number for the last filehandle accessed. Each
filehandle in Perl counts the number of lines that have been read from
it. (Depending on the value of $/ , Perl's idea of what constitutes a
line may not match yours.) When a line is read from a filehandle (via
readline() or <> ), or when tell() or seek() is called on it, $.
becomes an alias to the line counter for that filehandle. You can
adjust the counter by assigning to $. , but this will not actually
move the seek pointer. Localizing $. will not localize the
filehandle's line count. Instead, it will localize perl's notion of
which filehandle $. is currently aliased to. $. is reset when the
filehandle is closed, but not when an open filehandle is reopened
without an intervening close(). For more details, see I/O Operators in
perlop. Because <> never does an explicit close, line numbers increase
across ARGV files (but see examples in eof). You can also use
HANDLE->input_line_number(EXPR) to access the line counter for a given
filehandle without having to worry about which handle you last
accessed. Mnemonic: many programs use "." to mean the current line
number.
REFERENCE:
http://perldoc.perl.org/perlvar.html
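An equivalent way to get the same effect is to read and throw away the first 3000 lines before the main loop, so $. is not tested on every iteration (a small sketch of the same idea, using the same file name as above):
use strict;
use warnings;

open(my $fh, '<', 'data.dat') or die $!;
<$fh> for 1 .. 3000;    # read and discard the first 3000 lines
while (<$fh>) {
    # process the remaining lines here
}
close($fh);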

append a text on the top of the file

I want to add text at the top of my data.txt file, but this code adds the text at the end of the file. How can I modify this code to write the text at the top of my data.txt file? Thanks in advance for any assistance.
open (MYFILE, '>>data.txt');
print MYFILE "Title\n";
close (MYFILE)
perl -pi -e 'print "Title\n" if $. == 1' data.text
Your open syntax is deprecated (thanks, Seth); prefer the three-argument form with an error check:
open(MYFILE, '>>', "data.txt") or die $!;
You will have to make a full pass through the file and write out the desired data before the existing file contents:
open my $in, '<', $file or die "Can't read old file: $!";
open my $out, '>', "$file.new" or die "Can't write new file: $!";
print $out "# Add this line to the top\n"; # <--- HERE'S THE MAGIC
while( <$in> ) {
print $out $_;
}
close $out;
close $in;
unlink($file);
rename("$file.new", $file);
(gratuitously stolen from the Perl FAQ, then modified)
This will process the file line-by-line so that on large files you don't chew up a ton of memory. But, it's not exactly fast.
Hope that helps.
There is a much simpler one-liner to prepend a block of text to every file. Let's say you have a set of files named body1, body2, body3, etc, to which you want to prepend a block of text contained in a file called header:
cat header | perl -0 -i -pe 'BEGIN {$h = <STDIN>}; print $h' body*
Appending to the top is normally called prepending.
open(M, "<", "data.txt");
@m = <M>;
close(M);
open(M, ">", "data.txt");
print M "foo\n";
print M @m;
close(M);
Alternatively, open data.txt- for writing and then rename data.txt- to data.txt after the close, which has the benefit of being atomic, so interruptions cannot leave the data.txt file truncated.
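That could look roughly like this (a sketch of the approach just described, using the Title line and data.txt name from the question):
open(my $in,  '<', 'data.txt')  or die "Can't read data.txt: $!";
open(my $out, '>', 'data.txt-') or die "Can't write data.txt-: $!";
print $out "Title\n";          # the new first line
print $out $_ while <$in>;     # copy the original contents after it
close($in);
close($out);
rename('data.txt-', 'data.txt') or die "Can't rename: $!";   # atomic swap into place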
See the Perl FAQ Entry on this topic
perl -ni -e 'print "Title\n" if $. == 1; print' filename , this prints the title once and then each original line.
