mlcp performs differently for different input directory paths - performance

I am using mlcp v9.0.4 to load data into MarkLogic v9.0.9 and I am trying to figure out the following:
1. If a CSV file has no data rows and contains only the column names, the file never gets loaded. How can I overcome this and load such empty files?
2. mlcp behaves differently when input_file_path is a directory containing CSVs versus a directory containing another directory.
E.g. if the structure is /dir/dir1/*.csv, then input_file_path=/dir/dir1/ loads faster than input_file_path=/dir/ (with all other options left at their defaults).
What is the logic that mlcp applies to do the load here?
Should I change any options so that both ways give me the same result?
For point 1:
I could add an empty row to the CSV and load it, but I would rather not take this approach.
I tried using a transform module, but that slows down the load.
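As a hedged alternative for point 1, here is a minimal Python pre-check sketch (the directory path is a placeholder) that identifies header-only CSVs before the mlcp run, so they can be handled separately instead of being padded with a dummy row:

import csv
import glob

# Placeholder input directory; adjust to the real /dir/dir1/ path.
INPUT_DIR = "/dir/dir1/"

header_only = []
for path in glob.glob(INPUT_DIR + "*.csv"):
    with open(path, newline="") as f:
        rows = csv.reader(f)
        header = next(rows, None)      # column names, if present
        first_data = next(rows, None)  # None means no data rows follow
        if header is not None and first_data is None:
            header_only.append(path)

# These files produce no documents in a delimited_text import,
# so decide separately how (or whether) to load them.
print("Header-only CSVs:", header_only)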
For point 2: I have been experimenting with the mlcp options batch_size, split_size, max_split_size, thread_count, and thread_count_per_split in different combinations, as given in the MarkLogic docs. However, I wonder if I am just beating around the bush.
I want to understand how mlcp treats the inputs under the hood.
For point 2:
For a server with 128 GB RAM, the following are the details I tried.
File/directory structure:
/dir/dir1/1.csv - 4 MB
/dir/dir1/2.csv - 10 MB
/dir/dir1/3.csv - 400 MB
/dir/dir1/4.csv - 3000 MB
Database configuration:
forest policy - bucket
locking - off
journaling - fast
Options file for mlcp:
-generate_uri
true
-fast_load
true
-thread_count
32
-split_size
true
-max_split_size
94371840
-thread_count_per_split
1
-batch_size
100
-transaction_size
20

For point 1) What would you expect as the result of loading a file with no data rows? Considering that the data model is such that 1 CSV 'row' == 1 ML document, 0 CSV data 'rows' == ??? documents? Are you expecting a number != 0?
For point 2) Could you share the performance difference you are seeing? What does "loads faster" mean, and what does the final result set look like?

Related

JMeter: read 2 different CSV files in different loops

Currently, I have a requirement where I need to make sure that data, once read, is not read again. Earlier I used HttpSimpleTableServer with keep=false when I had to run only one loop. However, now I need to run 2 loops, and that option doesn't work because the same CSV is read from the start again for the second loop. So I was wondering if there is a way to read data from a different CSV file per loop. If not, how can I make sure that different data is read from the CSV for every loop and no data is ever repeated? My JMeter version is 5.3.
You can use the CSV Data Set Config component to read the data from CSV files.
Set the `Recycle on EOF?` flag to false to read the data only once.
You may set the remaining flags based on your need.
You may add two different CSV Data Set Config elements to work with different CSV files.
If you want to handle this programmatically, the API documentation will be useful.
If you need to read 2 different files in 2 different loops, you should consider going for the __CSVRead() function instead.
Create 2 files like file0.csv and file1.csv
Once done you will be able to:
${__CSVRead(file${__jm__Thread Group__idx}.csv,0)} - read first column
${__CSVRead(file${__jm__Thread Group__idx}.csv,1)} - read second column
${__CSVRead(file${__jm__Thread Group__idx}.csv,next)} - proceed to next row
etc.
The __CSVRead() function will proceed to the next file on the next Thread Group iteration.
More information: How to Pick Different CSV Files at JMeter Runtime

Solution to small-files bottleneck in HDFS

I have hundreds of thousands of small CSV files in HDFS. Before merging them into a single DataFrame, I need to add an id to each file individually (or else in the merge it won't be possible to distinguish between data from different files).
Currently I am relying on YARN to distribute the processes I create, which add the id to each file and convert it to Parquet format. I find that no matter how I tune the cluster (in size/executors/memory), the throughput is limited to 2000-3000 files/h.
from multiprocessing.pool import ThreadPool
import pyspark.sql.functions as F

def addIdCsv(x):
    # x is an entry from fileList carrying the log id and the file path
    logId = x['logId']
    filePath = x['filePath']
    fLogRaw = spark.read.option("header", "true").option('inferSchema', 'true').csv(filePath)
    fLogRaw = fLogRaw.withColumn('id', F.lit(logId))
    fLogRaw.write.mode('overwrite').parquet(filePath + '_added')

for i in range(0, numBatches):
    fileSlice = fileList[i*batchSize:((i+1)*batchSize)]
    p = ThreadPool(numNodes)
    logger.info('\n\n\n --------------- \n\n\n')
    logger.info('Starting Batch : ' + str(i))
    logger.info('\n\n\n --------------- \n\n\n')
    p.map(lambda x: addIdCsv(x), fileSlice)
You can see that my cluster is underperforming on CPU, but in the YARN manager it is given 100% access to resources.
What is the best way to solve this part of the data pipeline? What is the bottleneck?
Update 1
The jobs are evenly distributed as you can see in the event timeline visualization below.
As per #cricket_007's suggestion, NiFi provides a good, easy solution to this problem that is more scalable and integrates better with other frameworks than plain Python. The idea is to read the files into NiFi before writing to HDFS (in my case they are in S3). There is still an inherent bottleneck in reading/writing to S3, but the throughput is around 45k files/h.
The flow looks like this.
Most of the work is done in the ReplaceText processor, which finds the end-of-line character '|' and adds the UUID and a newline.
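For comparison, a minimal PySpark sketch of another way to avoid launching one job per file: read all the small CSVs in a single pass and derive the id from each row's source path with input_file_name(). The paths and the id-extraction pattern below are placeholder assumptions.

from pyspark.sql import SparkSession
import pyspark.sql.functions as F

spark = SparkSession.builder.appName("add-id-in-one-pass").getOrCreate()

# One job reads every small CSV instead of one job per file.
df = (spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("s3a://bucket/input_dir/*.csv"))    # placeholder location

# Tag each row with its source file, then derive the id from the path.
df = df.withColumn("source_file", F.input_file_name())
df = df.withColumn("id", F.regexp_extract("source_file", r"([^/]+)\.csv$", 1))

df.write.mode("overwrite").parquet("s3a://bucket/output_dir/")   # placeholder location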

Speed up adding data to PostgreSQL from a text file using a Django Python script

I am working with a server whose configuration is:
RAM - 56GB
Processor - 2.6 GHz x 16 cores
How can I do parallel processing using the shell? How can I utilize all the cores of the processor?
I have to load data from text files that contain millions of entries; for example, one file contains half a million lines of data.
I am using a Django Python script to load the data into a PostgreSQL database.
But it takes a lot of time to add the data to the database even though I have such a well-configured server, and I don't know how to utilize the server's resources in parallel so that it takes less time to process the data.
Yesterday I loaded only 15,000 lines of data from a text file into PostgreSQL and it took nearly 12 hours.
My django python script is as below:
import re
import collections

def SystemType():
    filename = raw_input("Enter file Name:")
    in_file = file(filename, "r")
    out_file = file("SystemType.txt", "w+")
    for line in in_file:
        line = line.decode("unicode_escape")
        line = line.encode("ascii", "ignore")
        values = line.split("\t")
        if values[1]:
            for list in values[1].strip("wordnetyagowikicategory"):
                out_file.write(re.sub("[^\ a-zA-Z()<>\n""]", " ", list))

# Eliminate duplicate entries from the extracted data using a regular expression
def FSystemType():
    lines_seen = set()
    outfile = open("Output.txt", "w+")
    infile = open("SystemType.txt", "r+")
    for line in infile:
        if line not in lines_seen:
            l = line.lstrip()
            # The regexp below is used to handle camel case.
            outfile.write(re.sub(r'((?<=[a-z])[A-Z]|(?<!\A)[A-Z](?=[a-z]))', r' \1', l).lower())
            lines_seen.add(line)
    infile.close()
    outfile.close()

sylist = []

def create_system_type(stname):
    syslist = Systemtype.objects.all()
    for i in syslist:
        sylist.append(str(i.title))
    if not stname in sylist:
        slu = slugify(stname)
        st = Systemtype()
        st.title = stname
        st.slug = slu
        # st.sites = Site.objects.all()[0]
        st.save()
        print "one ST added."
If you could express your requirements without the code (not every shell programmer can really read Python), possibly we could help here.
E.g. your report of 12 hours for 15,000 lines suggests you have a too-busy "for" loop somewhere, and I'd point at the nested for:
for list in values[1]....
What are you trying to strip? Individual characters, whole words? ...
Then I'd suggest "awk".
If you are able to work out the precise data structure required by Django, you can load the database tables directly using the psql "copy" command. You could do this by preparing a CSV file to load into the db.
There are any number of reasons why loading is slow with your approach. First of all, Django has a lot of transactional overhead. Secondly, it is not clear how you are running the Django code -- is this via the internal testing server? If so, you may have to deal with the slowness of that. Finally, what makes a database fast is normally not CPU, but rather fast IO and lots of memory.
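To illustrate the "copy" suggestion, here is a minimal sketch using psycopg2's COPY support; the connection details, table name, and column names are placeholders to be adapted to the actual Django model's table:

import psycopg2

# Placeholders: adjust connection details, table and columns to your schema.
conn = psycopg2.connect(dbname="mydb", user="myuser", password="secret", host="localhost")
cur = conn.cursor()

with open("prepared_data.csv") as f:
    # A single COPY statement loads the whole file, avoiding per-row ORM overhead.
    cur.copy_expert(
        "COPY systemtype (title, slug) FROM STDIN WITH (FORMAT csv, HEADER true)",
        f,
    )

conn.commit()
cur.close()
conn.close()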

How does HDFS append work?

Let's assume one is using the default block size (128 MB), and there is a file using 130 MB; so one full-size block and one block with 2 MB. Then 20 MB needs to be appended to the file (the total should now be 150 MB). What happens?
Does HDFS actually resize the last block from 2 MB to 22 MB, or create a new block?
How does appending to a file in HDFS deal with concurrency?
Is there a risk of data loss?
Does HDFS create a third block, put the 20+2 MB in it, and delete the block with 2 MB? If yes, how does this work concurrently?
According to the latest design document in the Jira issue mentioned before, we find the following answers to your question:
HDFS will append to the last block, not create a new block and copy the data from the old last block. This is not difficult because HDFS just uses a normal filesystem to write these block-files as normal files. Normal file systems have mechanisms for appending new data. Of course, if you fill up the last block, you will create a new block.
Only one single write or append to any file is allowed at the same time in HDFS, so there is no concurrency to handle. This is managed by the namenode. You need to close a file if you want someone else to begin writing to it.
If the last block in a file is not replicated, the append will fail. The append is written to a single replica, which pipelines it to the other replicas, similar to a normal write. It seems to me like there is no extra risk of data loss as compared to a normal write.
Here is a very comprehensive design document about append and it contains concurrency issues.
The current HDFS docs give a link to that document, so we can assume that it is the recent one. (The document date is 2009.)
And the related issue.
Hadoop Distributed File System supports appends to files, and in this case it should add the 20 MB to the 2nd block in your example (the one with 2 MB in it initially). That way you will end up with two blocks, one with 128 MB and one with 22 MB.
This is the reference to the append Java docs for HDFS.
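To observe the behaviour on a real cluster, here is a small hedged sketch (the paths are placeholders) that appends local data to an existing HDFS file with the standard shell command and then inspects the resulting block layout with fsck:

import subprocess

# Placeholders: adjust the local and HDFS paths for your cluster.
local_part = "/tmp/extra_20mb.bin"
hdfs_file = "/data/example/file_130mb.bin"

# Append local data to the existing HDFS file.
subprocess.check_call(["hdfs", "dfs", "-appendToFile", local_part, hdfs_file])

# Print how the file is now laid out across blocks.
print(subprocess.check_output(["hdfs", "fsck", hdfs_file, "-files", "-blocks"]).decode())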

Hadoop - how do map-reduce tasks know which part of a file to handle?

I've been starting to learn Hadoop, and currently I'm trying to process log files that are not too well structured - in that the value I normally use for the M/R key is typically found at the top of the file (once). So basically my mapping function takes that value as the key and then scans the rest of the file to aggregate the values that need to be reduced. So a [fake] log might look like this:
## log.1
SOME-KEY
2012-01-01 10:00:01 100
2012-01-02 08:48:56 250
2012-01-03 11:01:56 212
.... many more rows
## log.2
A-DIFFERENT-KEY
2012-01-01 10:05:01 111
2012-01-02 16:46:20 241
2012-01-03 11:01:56 287
.... many more rows
## log.3
SOME-KEY
2012-02-01 09:54:01 16
2012-02-02 05:53:56 333
2012-02-03 16:53:40 208
.... many more rows
I want to accumulate the 3rd column for each key. I have a cluster of several nodes running this job, and so I was bothered by several issues:
1. File Distribution
Given that Hadoop's HDFS works in 64 MB blocks (by default), and every file is distributed over the cluster, can I be sure that the correct key will be matched against the proper numbers? That is, if the block containing the key is in one node, and a block containing data for that same key (a different part of the same log) is on a different machine - how does the M/R framework match the two (if at all)?
2. Block Assignment
For text logs such as the ones described, how is each block's cutoff point decided? Is it after a row ends, or exactly at 64 MB (binary)? Does it even matter? This relates to my #1, where my concern is that the proper values are matched with the correct keys over the entire cluster.
3. File structure
What is the optimal file structure (if any) for M/R processing? I'd probably be far less worried if a typical log looked like this:
A-DIFFERENT-KEY 2012-01-01 10:05:01 111
SOME-KEY 2012-01-02 16:46:20 241
SOME-KEY 2012-01-03 11:01:56 287
A-DIFFERENT-KEY 2012-02-01 09:54:01 16
A-DIFFERENT-KEY 2012-02-02 05:53:56 333
A-DIFFERENT-KEY 2012-02-03 16:53:40 208
...
However, the logs are huge and it would be very costly (time) to convert them to the above format. Should I be concerned?
4. Job Distribution
Are the jobs assigned such that only a single JobClient handles an entire file? Rather, how are the keys/values coordinated between all the JobClients? Again, I'm trying to guarantee that my shady log structure still yields correct results.
Given that Hadoop's HDFS works in 64 MB blocks (by default), and every file is distributed over the cluster, can I be sure that the correct key will be matched against the proper numbers? That is, if the block containing the key is in one node, and a block containing data for that same key (a different part of the same log) is on a different machine - how does the M/R framework match the two (if at all)?
How the keys and the values are mapped depends on the InputFormat class. Hadoop has a couple of InputFormat classes and custom InputFormat classes can also be defined.
If FileInputFormat is used, then the key to the mapper is the file offset and the value is the line in the input file. In most cases the file offset is ignored and the value, which is a line in the input file, is processed by the mapper. So, by default each line in the log file will be a value to the mapper.
There might be a case where related data in a log file, as in the OP, is split across blocks; each block will be processed by a different mapper and Hadoop cannot relate them. One way is to let a single mapper process the complete file by using the FileInputFormat#isSplitable method. This is not an efficient approach if the file size is too large.
For text logs such as the ones described, how is each block's cutoff point decided? Is it after a row ends, or exactly at 64 MB (binary)? Does it even matter? This relates to my #1, where my concern is that the proper values are matched with the correct keys over the entire cluster.
Each block in HDFS is by default exactly 64 MB in size, unless the file size is less than 64 MB or the default block size has been modified; record boundaries are not considered. Part of a line in the input can be in one block and the rest in another. Hadoop understands record boundaries, so even if a record (line) is split across blocks, it will still be processed by a single mapper only. For this, some data transfer might be required from the next block.
Are the jobs assigned such that only a single JobClient handles an entire file? Rather, how are the keys/values coordinated between all the JobClients? Again, I'm trying to guarantee that my shady log structure still yields correct results.
It is not exactly clear what the query is. I would suggest going through some tutorials and getting back with specific queries.
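As a hedged illustration of the per-line key/value model (and of why the reformatted layout from question 3 is convenient), here is a minimal Hadoop Streaming mapper and reducer in Python that accumulate the third column per key; they assume the one-record-per-line format KEY DATE TIME VALUE:

#!/usr/bin/env python
# mapper.py - emits "key<TAB>value" for each input line (KEY DATE TIME VALUE).
import sys

for line in sys.stdin:
    parts = line.split()
    if len(parts) >= 4:
        print("%s\t%s" % (parts[0], parts[3]))

#!/usr/bin/env python
# reducer.py - input arrives sorted by key, so sums can be accumulated per key.
import sys

current_key, total = None, 0
for line in sys.stdin:
    key, value = line.rstrip("\n").split("\t")
    if key != current_key:
        if current_key is not None:
            print("%s\t%d" % (current_key, total))
        current_key, total = key, 0
    total += int(value)
if current_key is not None:
    print("%s\t%d" % (current_key, total))

These two scripts would be passed to the Hadoop Streaming jar as the -mapper and -reducer arguments; this is only a sketch of the per-line model, not the original answer's isSplitable approach.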
