I have a big dataset divided into files.
I would like to read and process my data one file at a time, and for this I have this Keras generator:
def myGenerator():
    while 1:
        rnd = random.randint(1,200)
        strRnd = str(rnd)
        lenRnd = len(strRnd)
        rndPadded = strRnd.rjust(5, '0')
        nSearchesInBatch = 100
        f = "path/part-" + rndPadded + "*"  # read one block of data
        data = sqlContext.read.load(f).toPandas()
        imax = int(data.shape[0]/nSearchesInBatch)  # number of batches that will be created sequentially from the generator
        for i in range(imax):
            data_batch = data[i*nSearchesInBatch:(i+1)*nSearchesInBatch]
            features = data_batch['features']
            output = data_batch['output']
            yield features, output
The problem is that the reading takes the biggest part (each file is around 200 MB), and in the meanwhile the GPU sits waiting. Is it possible to pre-read the next batch while the GPU is training on the previous one?
At the moment one file is read and split into steps (the inner loop): the CPUs are idle while the GPU is training, and as soon as the epoch finishes, the GPU goes idle and the CPU starts reading (which takes 20-30 seconds).
Any solution to parallelize this?
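For illustration, this is roughly the kind of prefetching I have in mind (a background thread keeps filling a bounded queue with ready batches while the GPU consumes the previous ones). The wrapper below is only a sketch with made-up names (prefetch_generator, max_prefetch), not something I have benchmarked:

import threading
from queue import Queue

def prefetch_generator(gen, max_prefetch=4):
    # Illustrative sketch: pull items from `gen` in a background thread so the
    # expensive file read overlaps with GPU training on already-queued batches.
    q = Queue(maxsize=max_prefetch)
    sentinel = object()

    def producer():
        for item in gen:
            q.put(item)          # blocks when the queue is full
        q.put(sentinel)

    threading.Thread(target=producer, daemon=True).start()
    while True:
        item = q.get()
        if item is sentinel:
            return
        yield item

# e.g. model.fit_generator(prefetch_generator(myGenerator()), steps_per_epoch=..., epochs=...)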
I have to deal with very big data (point clouds, generally more than 30,000,000 points) using Matlab. I can read the ASCII data using the textscan function. After reading, I need to detect invalid data (points with 0,0,0 coordinates) and then do some mathematical operations on each point or each line in the data. Currently, I first read the data with textscan and assign it to a matrix, then I use for loops to detect the invalid points and do the mathematical operations on each point or line. A sample of my code is shown below. According to Matlab's profiler, textscan takes 37% and the line
transformed_list((i:i),(1:4)) = coordinate_list((i:i),(1:4))*t_matrix;
takes 35% of the total computation time.
I tried it with another point cloud (around 5,500,000 points) and the profiler reported the same results. Is there a way to avoid the for loops, or another way to speed up this computation?
fileID = fopen('C:\Users\Mustafa\Desktop\ptx_all_data\dede5.ptx');
original_data = textscan(fileID,'%f %f %f %f %f %f %f', 'delimiter',' ');
fclose(fileID);
column = original_data{1}(1);
row = original_data{1}(2);
t_matrix = [original_data{1}(7) original_data{2}(7) original_data{3}(7) original_data{4}(7)
original_data{1}(8) original_data{2}(8) original_data{3}(8) original_data{4}(8)
original_data{1}(9) original_data{2}(9) original_data{3}(9) original_data{4}(9)
original_data{1}(10) original_data{2}(10) original_data{3}(10) original_data{4}(10)];
coordinate_list(:,1) = original_data{1}(11:length(original_data{1}));
coordinate_list(:,2) = original_data{2}(11:length(original_data{2}));
coordinate_list(:,3) = original_data{3}(11:length(original_data{3}));
coordinate_list(:,4) = 0;
coordinate_list(:,5) = original_data{4}(11:length(original_data{4}));
transformed_list = zeros(length(coordinate_list),5);
for i = 1:length(coordinate_list)
    if coordinate_list(i,1) == 0 && coordinate_list(i,2) == 0 && coordinate_list(i,3) == 0
        transformed_list(i,:) = NaN;
    else
        %transformed_list(i,:) = coordinate_list(i,:)*t_matrix;
        transformed_list((i:i),(1:4)) = coordinate_list((i:i),(1:4))*t_matrix;
        transformed_list(i,5) = coordinate_list(i,5);
    end
    %i
end
Thanks in advance
for loops with conditional statements like those will take ages to run. But what Matlab lacks in loop speed it makes up for with vectorization and indexing.
Let's try some logical indexing like this to solve the first step:
invalid = coordinate_list(:,1) == 0 & ...
          coordinate_list(:,2) == 0 & ...
          coordinate_list(:,3) == 0;
coordinate_list(invalid,:) = nan;
And then vectorize the second statement:
transformed_list(:,(1:4)) = coordinate_list(:,(1:4))*t_matrix;
As EBH mentioned above, this might be a bit heavy on your RAM. If it's more than your computer can handle, ask yourself if the coordinates really have to be doubles; maybe single precision will do. If that still isn't enough, try slicing the vector and performing the operation in parts.
A small example to give you an idea, because I had a 2-million-point cloud around here.
In R2015a:
transformed_list = zeros(length(coordinate_list),5);
tic
for i = 1:length(coordinate_list)
    if coordinate_list(i,1) == 0 && coordinate_list(i,2) == 0 && coordinate_list(i,3) == 0
        transformed_list(i,:) = NaN;
    else
        %transformed_list(i,:) = coordinate_list(i,:)*t_matrix;
        transformed_list((i:i),(1:3)) = coordinate_list((i:i),(1:3))*t_matrix;
        transformed_list(i,5) = 1;
    end
    %i
end
toc
Returns Elapsed time is 10.928142 seconds.
transformed_list = coordinate_list;
tic
invalid = coordinate_list(:,1) == 0 & ...
          coordinate_list(:,2) == 0 & ...
          coordinate_list(:,3) == 0;
coordinate_list(invalid,:) = nan;
transformed_list(:,(1:3)) = coordinate_list(:,(1:3))*t_matrix;
toc
Returns Elapsed time is 0.101696 seconds.
Rather than reading the whole file, you'd be better off using a loop with
fscanf(fileID, '%f', 7)
and processing input as you read it.
I know that this question isn't new, but I haven't found anything useful. In my case I have a 20 GB file and I need to read random lines from it. I now have a simple file index which contains the line numbers and the corresponding seek offsets. I also disabled buffering when reading, to read only the needed line.
And this is my code:
import io
import numpy as np

def create_random_file_gen(file_path, batch_size=0, dtype=np.float32, delimiter=','):
    index = load_file_index(file_path)
    if (batch_size > len(index)) or (batch_size == 0):
        batch_size = len(index)
    lines_indices = np.random.randint(0, len(index), batch_size)
    with io.open(file_path, 'rb', buffering=0) as f:
        for line_index in lines_indices:
            f.seek(index[line_index])
            line = f.readline(2048)
            yield __get_features_from_line(line, delimiter, dtype)
The problem is that it's extremely slow: reading 5000 lines takes 89 seconds on my Mac (reading from an SSD drive). Here is the code I used for testing:
features_gen = tedlium_random_speech_gen(5000)  # just a wrapper for the function given above
i = 0
for feature, cls in features_gen:
    if i % 1000 == 0:
        print("Got %d features" % i)
    i += 1
print("Total %d features" % i)
I've read something about memory-mapping files, but I don't really understand how it works: what the mapping does in essence and whether it will speed up the process or not.
So the main question is: what are the possible ways to speed up the process? The only way I see now is to read random blocks of lines rather than individual lines.
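From what I understand so far, the memory-mapping idea would look roughly like this with the same offset index (only a sketch, I have not verified that it is faster). create_random_file_gen_mmap is a made-up name; load_file_index and __get_features_from_line are the helpers from my code above:

import mmap
import numpy as np

def create_random_file_gen_mmap(file_path, batch_size=0, dtype=np.float32, delimiter=','):
    # Sketch only. load_file_index and __get_features_from_line are the same
    # helpers used in my generator above.
    index = load_file_index(file_path)
    if (batch_size > len(index)) or (batch_size == 0):
        batch_size = len(index)
    lines_indices = np.random.randint(0, len(index), batch_size)
    with open(file_path, 'rb') as f:
        # Map the whole file read-only; pages are loaded lazily by the OS,
        # so a random access touches only the pages holding that line.
        with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
            for line_index in lines_indices:
                start = index[line_index]
                end = mm.find(b'\n', start)
                line = mm[start:end] if end != -1 else mm[start:]
                yield __get_features_from_line(line, delimiter, dtype)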
I have two large files. One of them is an info file (about 270 MB and 16,000,000 lines) like this:
1101:10003:17729
1101:10003:19979
1101:10003:23319
1101:10003:24972
1101:10003:2539
1101:10003:28242
1101:10003:28804
The other is in standard FASTQ format (about 27 GB and 280,000,000 lines) like this:
@ST-E00126:65:H3VJ2CCXX:7:1101:1416:1801 1:N:0:5
NTGCCTGACCGTACCGAGGCTAACCCTAATGAGCTTAATCAAGATGATGCTCGTTATGG
+
AAAFFKKKKKKKKKFKKKKKKKFKKKKAFKKKKKAF7AAFFKFAAFFFKKF7FF<FKK
@ST-E00126:65:H3VJ2CCXX:7:1101:10003:75641:N:0:5
TAAGATAGATAGCCGAGGCTAACCCTAATGAGCTTAATCAAGATGATGCTCGTTATGG
+
AAAFFKKKKKKKKKFKKKKKKKFKKKKAFKKKKKAF7AAFFKFAAFFFKKF7FF<FKK
The FASTQ file uses four lines per sequence. Line 1 begins with a '@' character and is followed by a sequence identifier. For each sequence, this part of Line 1 is unique:
1101:1416:1801 and 1101:10003:75641
And I want to grab Line 1 and the next three lines from the FASTQ file according to the info file. Here is my code:
import gzip
import re
count = 0
with open('info_path') as info, open('grab_path','w') as grab:
    for i in info:
        sample = i.strip()
        with gzip.open('fq_path') as fq:
            for j in fq:
                count += 1
                if count % 4 == 1:
                    line = j.strip()
                    m = re.search(sample, j)
                    if m != None:
                        grab.writelines(line + '\n' + fq.next() + fq.next() + fq.next())
                        count = 0
                        break
And it works, but because both of these files have millions of lines, it's inefficient (running for one day only produced 20,000 lines).
UPDATE on July 6th:
I found that the info file can be read into memory (thanks to @tobias_k for reminding me), so I created a dictionary whose keys are the info lines and whose values are all 0. After that, I read the FASTQ file four lines at a time, use the identifier part as the key, and if the value is 0 I grab those four lines. Here is my code:
import gzip
dic = {}
with open('info_path') as info:
    for i in info:
        sample = i.strip()
        dic[sample] = 0
with gzip.open('fq_path') as fq, open('grap_path',"w") as grab:
    for j in fq:
        if j[:10] == '@ST-E00126':
            line = j.split(':')
            match = line[4] + ':' + line[5] + ':' + line[6][:-2]
            if dic.get(match) == 0:
                grab.writelines(j + fq.next() + fq.next() + fq.next())
This way is much faster: it takes 20 minutes to get all the matched lines (about 64,000,000 lines). I have also thought about sorting the FASTQ file first with an external sort. Splitting the file into chunks that fit into memory is fine; my trouble is how to keep the next three lines together with the identifier line while sorting. The answer I found by googling is to linearize these four lines first, but that alone would take 40 minutes.
Anyway thanks for your help.
You can sort both files by the identifier part (the 1101:1416:1801). Even if the files do not fit into memory, you can use external sorting.
After this, you can apply a simple merge-like strategy: read both files together and do the matching as you go. Something like this (pseudocode):
entry1 = readFromFile1()
entry2 = readFromFile2()
while (none of the files ended)
    if (entry1.id == entry2.id)
        record match
        entry1 = readFromFile1()
        entry2 = readFromFile2()
    else if (entry1.id < entry2.id)
        entry1 = readFromFile1()
    else
        entry2 = readFromFile2()
This way entry1.id and entry2.id are always close to each other and you will not miss any matches. At the same time, this approach requires iterating over each file once.
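To make the pseudocode concrete, a minimal Python sketch of the same merge might look like this. It assumes both files have already been externally sorted by identifier with the same (lexicographic) ordering, and that the FASTQ file has first been linearized to one record per line in the form identifier<TAB>record; the file names and that layout are illustrative, not part of the original question:

def merge_sorted(info_sorted_path, fastq_linearized_sorted_path, out_path):
    # Illustrative names. Both inputs are assumed sorted by identifier with the
    # same ordering; the FASTQ side holds one line per record: "id<TAB>record".
    with open(info_sorted_path) as info, \
         open(fastq_linearized_sorted_path) as fq, \
         open(out_path, 'w') as out:
        info_id = info.readline().strip()
        fq_line = fq.readline()
        while info_id and fq_line:
            fq_id, _, record = fq_line.rstrip('\n').partition('\t')
            if info_id == fq_id:
                out.write(record + '\n')            # record the match
                info_id = info.readline().strip()   # advance both sides
                fq_line = fq.readline()
            elif info_id < fq_id:
                info_id = info.readline().strip()
            else:
                fq_line = fq.readline()

Advancing both cursors on a match assumes each identifier occurs at most once per file; with that, the matching itself is a single sequential pass over each sorted file.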
I would like to use Fortran to read ultraviolet radiation data that has been produced by the Japan Aerospace Exploration Agency. This data is at daily and monthly temporal resolution from 2000-2010, at a ~5 km spatial resolution. The question is worth answering as the data could be useful for a number of environment/health projects, and it is freely available, with proper acknowledgement of the source and sharing of a preprint of any subsequent publications, from:
ftp://suzaku.eorc.jaxa.jp/pub/GLI/glical/Global_05km/monthly/uvb/
There is a readme file available, which provides instructions on how to read the data using Fortran, as follows:
Instructions for _le files
Header
Read header (size= pixel size *2byte):
character head*14400
read(10,rec=1) head
read(head,'(2i6,2f8.2,f8.4,2e12.5,a1,a8,a1,a40)')
& npixel,nline,lon_min,lat_max,reso,slope,offset,',',
& para,',',outfile
Read data (e.g., fortran77)
parameter(nl=7200, ml=3601)
... open file by "unformatted", "recl=nl*2(byte)" (,"bytereclen")
integer*2 i2buf(nl,ml)
do m=1,ml
    read(10,rec=1+m) (i2buf(n,m), n=1,nl)
    do n=1,nl
        par=i2buf(n,m)*slope+offset
        write(6,*) 'PAR[Ein/m^2/day]=',par
    enddo
enddo
slope values
par__le : daily PAR [Ein/m^2/day] = DN * 0.01
dpar_le : direct PAR = DN * 0.01
swr__le : daily mean shortwave radiation [W/m^2] = DN * 0.01
tip__le : transmittance of instantaneous PAR at noon = DN * 0.0001
uva__le : daily mean UVA [W/m^2] = DN * 0.001
uvb__le : daily mean UVB [W/m^2] = DN * 0.0001
rpar_le : PAR-range surface reflectance (TOP of canopy/solid surfaces) = DN * 0.0001 (monthly data only)
error values
-1 as signed short integer (int16)
65535 as unsigned short integer (uint16)
Progress so far
I have downloaded and installed gfortran successfully on Mac OS X. I have downloaded a test file (MOD02SSH_A20000224Av6_v601_7200_3601_uvb__le.gz) and decompressed it. I have created a program file:
PROGRAM readuvr
IMPLICIT NONE
!some code
END PROGRAM
I will then type the following into the command line to create an executable and run it to extract the data.
gfortran -o executable
./executable
As a complete beginner to fortran, my question is: how can I use the instructions provided to build a program that can read the data and output it into a text file?
Well, that file expands to 51,868,800 bytes. The comments imply the header is 14,400 bytes, which leaves 51,854,400 bytes of actual data payload.
The header record is 14,400 bytes, i.e. npixel * 2 bytes, and 51,854,400 / 14,400 = 3601 data records. With 2 bytes (16-bit) per sample, that gives 7200 samples per line, which matches nl=7200, and 3601 lines, which matches ml=3601.
So basically, you need to read 14,400 bytes of header, then 3601 lines of data, each line consisting of 7200 values, each of those being 2 bytes wide...
Actually, if you are that unfamiliar with FORTRAN, you may like to extract the data with Perl, which is already installed and available on OS X anyway. I have started a VERY SIMPLISTIC Perl program that reads the data and prints the first 2 values on each line:
#!/usr/bin/perl
use strict;
use warnings;
# Read 14,400 bytes of header
my $buffer;
my $nBytes = 14400;
my $bytesRead = read (STDIN, $buffer, $nBytes) ;
my ($npixel,$nline,$lon_min,$lat_max,$reso,$slope,$offset,$junk)=split(' ',$buffer);
print "npixel:$npixel\n";
print "nline:$nline\n";
print "lon_min:$lon_min\n";
print "lat_max:$lat_max\n";
print "reso:$reso\n";
print "slope:$slope\n";
$offset =~ s/,.*//; # strip trailing comma and junk
print "offset:$offset\n";
# Read actual lines of data
my $line;
for(my $m=1;$m<=$nline;$m++){
    read(STDIN,$line,$npixel*2);
    my $x=$npixel*2;
    my @values=unpack("S$x",$line);
    printf "Line: %d",$m;
    for(my $j=0;$j<2;$j++){
        printf ",%f",$values[$j]*$slope+$offset;
    }
    printf "\n"; # newline
}
Save it as go.pl and then in the Terminal, type the following once to make it executable
chmod +x go.pl
and then run it like this
./go.pl < MOD02SSH_A20000224Av6_v601_7200_3601_uvb__le
Sample output extract:
npixel:7200
nline:3601
lon_min:0.00
lat_max:90.00
reso:0.0500
slope:0.10000E-03
offset:0.00000E+00
...
...
Line: 3306,0.099800,0.099800
Line: 3307,0.099900,0.099900
Line: 3308,0.099400,0.074200
Line: 3309,0.098900,0.098900
Line: 3310,0.098400,0.098400
Line: 3311,0.074300,0.074200
Line: 3312,0.071300,0.071200
A Fortran (F2003 or so) solution. (The linked instructions are awful, by the way.)
      implicit none
      character*80 para,outfile
      character(len=:),allocatable::header,infile
      integer npixel,nline,blen,i
c note kind=2 is not standard. This needs to be a 2-byte integer.
      integer(kind=2),allocatable :: data(:,:)
      real lon_min,lat_max,reso,slope,off
c header is plain text, so first open formatted and
c directly read header data
      infile='MOD02SSH_A20000224Av6_v601_7200_3601_uvb__le'
      open(10,file=infile)
      read(10,*)npixel,nline,lon_min,lat_max,reso,slope,off,
     $     para,outfile
      close(10)
      write(*,*)npixel,nline,lon_min,lat_max,reso,slope,off,
     $     trim(para),' ',trim(outfile)
      blen=2*npixel
      allocate(character(len=blen)::header)
      allocate(data(npixel,nline))
      if( sizeof(data(1,1)).ne.2 )then
         write(*,*)'error kind=2 did not give a 2 byte integer'
         stop
      endif
c now close and reopen for binary read.
c direct access approach:
      open(20,file=infile,access='direct',recl=blen/4)
c note the granularity of the recl= specifier is not standard.
c ifort uses 4 bytes. (note this will break if npixel is not even )
      read(20,rec=1)header
      write(*,*)trim(header)
      do i=1,nline
         read(20,rec=i+1)data(:,i)
      enddo
c note streams if available is simpler: (we don't need to know rec len )
c open(20,file=infile,access='stream')
c read(20)header,data
      end
This is not actually validated because I don't have known file content to compare against.