Is the HDFS sink in Flume using an "anti-pattern" with its default config?

Looking at the default parameters of the HDFS sink in Apache Flume, it seems that it will produce tons of very small files (1 kB rolls). From what I have learned about GFS/HDFS, block sizes are 64 MB and file sizes should rather be in the gigabytes to make sure everything runs efficiently.
So I'm curious whether the default parameters of Flume are just misleading or whether I missed something else here.
Cheers.
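Flume does not force you to keep those defaults; here is a minimal sketch of a sink definition that rolls on size alone (the agent and sink names are made up for illustration):

```properties
# The stock defaults (hdfs.rollSize = 1024 bytes, hdfs.rollCount = 10 events,
# hdfs.rollInterval = 30 s) roll aggressively and produce tiny files.
# Rolling on size alone keeps files close to an HDFS block:
agent1.sinks.k1.type = hdfs
agent1.sinks.k1.hdfs.path = hdfs://namenode/flume/events
agent1.sinks.k1.hdfs.rollSize = 134217728
agent1.sinks.k1.hdfs.rollCount = 0
agent1.sinks.k1.hdfs.rollInterval = 0
```

Setting rollCount and rollInterval to 0 disables those triggers, so only the size threshold causes a roll.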

Related

Why is Spark setting partitions to the file size in bytes?

I have a very simple pyspark program that is supposed to read CSV files from S3:
r = sc.textFile('s3a://some-bucket/some-file.csv')
.map(etc... you know the drill...)
This was failing when running a local Spark node (it works in EMR). I was getting OOM errors and GC crashes. Upon further inspection, I realized that the number of partitions was insanely high. In this particular case r.getNumPartitions() would return 2358041.
I realized that that's exactly the size of my file in bytes. This, of course, makes Spark crash miserably.
I've tried different configurations, like changing mapred.min.split.size:
conf = SparkConf()
conf.setAppName('iRank {}'.format(datetime.now()))
conf.set("mapred.min.split.size", "536870912")
conf.set("mapred.max.split.size", "536870912")
conf.set("mapreduce.input.fileinputformat.split.minsize", "536870912")
I've also tried using repartition or passing a partitions argument to textFile, to no avail.
I would love to know what makes Spark think that it's a good idea to derive the number of partitions from the file size.
In general it doesn't. As nicely explained by eliasah in his answer to Spark RDD default number of partitions, it uses the max of minPartitions (2 if not provided) and the number of splits computed by the Hadoop input format.
The latter will be unreasonably high only if instructed to by the configuration. This suggests that some configuration file is interfering with your program.
Another possible problem with your code is that you use the wrong configuration. Hadoop options should be set on the hadoopConfiguration, not the Spark configuration. Since you are using Python, you have to go through the private JavaSparkContext instance:
sc = ... # type: SparkContext
sc._jsc.hadoopConfiguration().setInt("mapred.min.split.size", min_value)
sc._jsc.hadoopConfiguration().setInt("mapred.max.split.size", max_value)
There was actually a bug in Hadoop 2.6 which would do this: the initial S3A release didn't provide a block size for Spark to split on, and the default of "0" meant one byte per partition.
Later versions should all take fs.s3a.block.size as the config option specifying the block size; something like 33554432 (= 32 MB) would be a start.
If you are using Hadoop 2.6.x, don't use S3A. That's my recommendation.
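The arithmetic behind that failure mode is easy to sketch. Hadoop's FileInputFormat computes the split size as max(minSize, min(maxSize, blockSize)), so a reported block size of 0 collapses to 1-byte splits. This is a simplified model (it ignores FileInputFormat's 1.1x slop factor on the last split), but it reproduces the partition counts seen in the question:

```python
import math

def split_size(block_size, min_size=1, max_size=2**63 - 1):
    # FileInputFormat's rule: max(minSize, min(maxSize, blockSize))
    return max(min_size, min(max_size, block_size))

def num_splits(file_size, block_size, **kw):
    # Approximate split count; ignores the 1.1x "slop" on the final split.
    return math.ceil(file_size / split_size(block_size, **kw))

# Hadoop 2.6's S3A reported a block size of 0, so a ~2.3 MB file became
# one partition per byte:
print(num_splits(2358041, 0))         # 2358041
# With fs.s3a.block.size = 32 MB the same file is a single split:
print(num_splits(2358041, 33554432))  # 1
```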

Understanding file handling in hadoop

I am new to the Hadoop ecosystem and have only a basic idea of it. Please assist with the following queries to start with:
If the file I am trying to copy into HDFS is very big and cannot be accommodated by the available commodity hardware in my Hadoop cluster, what happens? Will the file wait until space becomes available, or will there be an error?
How can I find out well in advance, or predict, that the above scenario will occur in a Hadoop production environment where we continuously receive files from outside sources?
How do I add a new node to a live HDFS cluster? There are many methods, but I wanted to know which files I need to alter.
How many blocks does a node have? Assume a node is a machine with storage (HDD, 500 GB), RAM (1 GB) and a dual-core processor. In this scenario, is it something like 500 GB / 64 MB, assuming each block is configured to be 64 MB?
If I copyFromLocal a 1TB file into HDFS, which portion of the file will be placed in which block in which node? How can I know this?
How can I find which record/row of the input file is available in which file of the multiple files split by Hadoop?
What is the purpose of each of the XML files configured (core-site.xml, hdfs-site.xml & mapred-site.xml)? In a distributed environment, which of these files should be placed on all the slave DataNodes?
How do I know how many map and reduce tasks will run for any read/write activity? Will the write operation always have 0 reducers?
Apologies for asking some basic questions. Kindly suggest methods for finding answers to all of the above queries.
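For the block-count question above, the arithmetic is just disk size divided by block size. A quick back-of-the-envelope with the numbers from the question (ignoring replication and any reserved, non-HDFS space on the disk):

```python
import math

GB = 1024**3
MB = 1024**2

disk_per_node = 500 * GB
block_size = 64 * MB

# Upper bound on the number of 64 MB blocks one 500 GB disk can hold:
print(disk_per_node // block_size)        # 8000

# Blocks needed for a 1 TB file at a 64 MB block size:
print(math.ceil(1024 * GB / block_size))  # 16384
```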

Adding new files to a running hadoop cluster

Consider that you have 10 GB of data and you want to process it with a MapReduce program using Hadoop. Instead of copying all 10 GB to HDFS at the beginning and then running the program, I want to, for example, copy 1 GB, start the job, and gradually add the remaining 9 GB over time. I wonder if this is possible in Hadoop.
Thanks,
Morteza
Unfortunately this is not possible with MapReduce. When you initiate a MapReduce job, part of the setup process is determining the block locations of your input. If the input is only partially there, the setup will only work on those blocks and won't dynamically add inputs.
If you are looking for a stream processor, have a look at Apache Storm https://storm.apache.org/ or Apache Spark https://spark.apache.org/
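To make the "process files as they arrive" idea concrete, here is a toy, framework-free sketch of the pattern those stream processors automate; a real pipeline would use Storm spouts or Spark's streaming file sources rather than hand-rolled polling:

```python
import os
import tempfile

def process_new_files(directory, seen, handle):
    """Toy version of the pattern a stream processor automates:
    poll a directory and hand each not-yet-seen file to `handle`."""
    for name in sorted(os.listdir(directory)):
        if name not in seen:
            seen.add(name)
            with open(os.path.join(directory, name)) as f:
                handle(f.read())

# Demo: files "arrive" between polling rounds.
d = tempfile.mkdtemp()
seen, counts = set(), []
with open(os.path.join(d, "part-1.txt"), "w") as f:
    f.write("a\nb\n")
process_new_files(d, seen, lambda text: counts.append(len(text.splitlines())))
with open(os.path.join(d, "part-2.txt"), "w") as f:
    f.write("c\n")
process_new_files(d, seen, lambda text: counts.append(len(text.splitlines())))
print(counts)  # [2, 1]
```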

Spark/Hadoop throws exception for large LZO files

I'm running an EMR Spark job on some LZO-compressed log files stored in S3. There are several log files stored in the same folder, e.g.:
...
s3://mylogfiles/2014-08-11-00111.lzo
s3://mylogfiles/2014-08-11-00112.lzo
...
In the spark-shell I'm running a job that counts the lines in the files. If I count the lines individually for each file, there is no problem, e.g. like this:
// Works fine
...
sc.textFile("s3://mylogfiles/2014-08-11-00111.lzo").count()
sc.textFile("s3://mylogfiles/2014-08-11-00112.lzo").count()
...
If I use a wildcard to load all the files with a one-liner, I get two kinds of exceptions.
// One-liner throws exceptions
sc.textFile("s3://mylogfiles/*.lzo").count()
The exceptions are:
java.lang.InternalError: lzo1x_decompress_safe returned: -6
at com.hadoop.compression.lzo.LzoDecompressor.decompressBytesDirect(Native Method)
and
java.io.IOException: Compressed length 1362309683 exceeds max block size 67108864 (probably corrupt file)
at com.hadoop.compression.lzo.LzopInputStream.getCompressedData(LzopInputStream.java:291)
It seems to me that a solution is hinted at by the text of the last exception, but I don't know how to proceed. Is there a limit to how big LZO files are allowed to be, or what is the issue?
My question is: Can I run Spark queries that load all LZO-compressed files in an S3 folder, without getting I/O related exceptions?
There are 66 files of roughly 200 MB each.
EDIT:
The exception only occurs when running Spark with Hadoop2 core libs (ami 3.1.0). When running with Hadoop1 core libs (ami 2.4.5), things work fine. Both cases were tested with Spark 1.0.1.
kgeyti's answer works fine, but:
LzoTextInputFormat introduces a performance hit, since it checks for an .index file for every LZO file. This can be especially painful with many LZO files on S3 (I've experienced delays of up to several minutes, caused by thousands of requests to S3).
If you know up front that your LZO files are not splittable, a more performant solution is to create a custom, non-splittable input format:
import org.apache.hadoop.fs.Path
import org.apache.hadoop.mapreduce.JobContext
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat
class NonSplittableTextInputFormat extends TextInputFormat {
override def isSplitable(context: JobContext, file: Path): Boolean = false
}
and read the files like this:
context.newAPIHadoopFile("s3://mylogfiles/*.lzo",
classOf[NonSplittableTextInputFormat],
classOf[org.apache.hadoop.io.LongWritable],
classOf[org.apache.hadoop.io.Text])
.map(_._2.toString)
I haven't run into this specific issue myself, but it looks like .textFile expects files to be splittable, much like Cedrik's problem of Hive insisting on using CombineFileInputFormat.
You could either index your lzo files, or try using the LzoTextInputFormat - I'd be interested to hear if that works better on EMR:
sc.newAPIHadoopFile("s3://mylogfiles/*.lzo",
classOf[com.hadoop.mapreduce.LzoTextInputFormat],
classOf[org.apache.hadoop.io.LongWritable],
classOf[org.apache.hadoop.io.Text])
.map(_._2.toString) // if you just want a RDD[String] without writing a new InputFormat
.count
Yesterday we deployed Hive on an EMR cluster and had the same problem with some LZO files in S3 which had been handled without any problem by another, non-EMR cluster. After some digging in the logs I noticed that the map tasks read the S3 files in 250 MB chunks, although the files are definitely not splittable.
It turned out that the parameter mapreduce.input.fileinputformat.split.maxsize was set to 250000000 (~250 MB). That resulted in LZO opening a stream from within a file, and ultimately a corrupt LZO block.
I set the parameter mapreduce.input.fileinputformat.split.maxsize=2000000000, larger than the maximum file size of our input data, and everything works now.
I'm not exactly sure how that relates to Spark, but changing the InputFormat might help, which seems like the problem in the first place, as mentioned in How Amazon EMR Hive Differs from Apache Hive.
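For anyone hitting the same symptom in Hive, the fix described above is just a session-level setting; the exact value only needs to exceed your largest input file:

```sql
SET mapreduce.input.fileinputformat.split.maxsize=2000000000;
```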

Hadoop dfs replicate

Sorry guys, just a simple question, but I cannot find the exact question on Google.
The question is about what dfs.replication means. If I create one file named filmdata.txt in HDFS and I set dfs.replication=1, is there in total one file (one filmdata.txt), or will Hadoop create another replica besides the main file (filmdata.txt)?
To put it shortly: if I set dfs.replication=1, is there in total one filmdata.txt, or two?
Thanks in advance
The number of copies of each file in the file system will be what's specified by the dfs.replication factor. So, if you set dfs.replication=1, then there will be only one copy of the file in the file system.
Check the Apache Documentation for the other configuration parameters.
To ensure high availability of data, Hadoop replicates the data.
When we store files in HDFS, the Hadoop framework splits each file into a set of blocks (64 MB or 128 MB), and these blocks are replicated across the cluster nodes. The configuration dfs.replication specifies how many replicas are required.
The default value for dfs.replication is 3, but this is configurable depending on your cluster setup.
Hope this helps.
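For reference, this setting normally lives in hdfs-site.xml; a minimal sketch (the value 1 mirrors the question, while production clusters usually keep the default of 3):

```xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
    <description>One copy in total: the "main" file is itself the single replica.</description>
  </property>
</configuration>
```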
The link provided by Praveen is now broken.
Here is the updated link describing the parameter dfs.replication.
Refer to Hadoop Cluster Setup for more information on configuration parameters.
You may want to note that files can span multiple blocks, and each block will be replicated the number of times specified in dfs.replication (default value 3). The size of those blocks is specified by the parameter dfs.block.size.
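Putting the two parameters together, the raw disk space a file consumes is roughly its logical size times the replication factor (a sketch; HDFS does not pad the last, partial block to full block size):

```python
import math

def num_blocks(file_size, block_size):
    # How many HDFS blocks the file spans.
    return math.ceil(file_size / block_size)

def raw_bytes_stored(file_size, replication=3):
    # HDFS doesn't pad the final partial block, so raw usage is simply
    # logical size times the replication factor.
    return file_size * replication

one_gb = 1024**3
print(num_blocks(one_gb, 128 * 1024**2))   # 8 blocks at a 128 MB block size
print(raw_bytes_stored(one_gb) // one_gb)  # 3: 1 GB of data occupies ~3 GB of disk
```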
In HDFS we use commodity machines to store the data. These commodity machines are not high-end machines like servers with lots of RAM, so there is a chance of losing a data node (d1, d2, d3) or a block (b1, b2, b3). To guard against this, HDFS splits the data into blocks (64 MB or 128 MB) and stores each block as three replicas by default, each on a separate data node (d1, d2, d3). Now, if block b1 gets corrupted on data node d1, a copy of b1 is still available on d2 and d3, so a client can ask d2 to process b1 and return the result; likewise, if d2 fails, the client can ask d3. This is what dfs.replication means.
Hope you got some clarity.
