I have a Hive table that I am trying to index into SolrCloud using a morphline. However, the data behind the Hive table is one big 20GB file, which the morphline is taking a long time to process.
Instead of running multiple mappers and reducers, only one mapper ever runs, probably because we have only one file.
yarn jar /opt/<path>/search-mr-1.0.0-cdh5.5.1-job.jar \
org.apache.solr.hadoop.MapReduceIndexerTool \
--morphline-file morphlines.conf \
--output-dir hdfs://<outputdir> \
--zk-host node1.datafireball.com:2181/solr \
--collection <collectionname> \
--input-list <filewherethedatais> \
--mappers 6
And it still kicked off only one mapper... this is taking forever. Can anyone shed some light on this?
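As far as I understand, MapReduceIndexerTool never splits an individual input file across mappers (each mapper processes whole files from the input list), so a single 20GB file caps the job at one mapper no matter what --mappers is set to. A hedged workaround sketch (the paths, line count, and file names below are placeholders, not from my actual setup): rewrite the data as several smaller files in HDFS and list those instead of the single file.

# Pull the single file down and split it on line boundaries (adjust the line count).
hdfs dfs -get hdfs://<inputdir>/<bigfile> ./bigfile
split -l 5000000 ./bigfile part_
# Push the pieces back into HDFS under a new directory.
hdfs dfs -mkdir hdfs://<inputdir>/split
hdfs dfs -put part_* hdfs://<inputdir>/split/
# Build a list of the new file URIs, one per line, and use it as --input-list.
hdfs dfs -ls hdfs://<inputdir>/split/ | awk 'NR>1 {print $8}' > filelist.txt
hdfs dfs -put filelist.txt hdfs://<inputdir>/filelist.txt
# Then rerun the indexer with --input-list hdfs://<inputdir>/filelist.txt --mappers 6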
Resources you might find helpful:
Cloudera MapReduce batch indexing into SolrCloud
Kite SDK, which morphlines belong to.
We are using Spark, and up until now the output has been PSV files. Now, in order to save space, we'd like to compress the output. To do so, we will change the save call to write the JavaRDD with SnappyCodec, like this:
objectRDD.saveAsTextFile(rddOutputFolder, org.apache.hadoop.io.compress.SnappyCodec.class);
We will then use Sqoop to import the output into a database. The whole process works fine.
For previously generated PSV files in HDFS, we'd like to compress them in Snappy format as well. This is the command we tried:
hadoop jar /usr/hdp/2.6.5.106-2/hadoop-mapreduce/hadoop-streaming-2.7.3.2.6.5.106-2.jar \
-Dmapred.output.compress=true -Dmapred.compress.map.output=true \
-Dmapred.output.compression.codec=org.apache.hadoop.io.compress.SnappyCodec \
-Dmapred.reduce.tasks=0 \
-input input-path \
-output output-path
The command works fine. But the issue is that Sqoop can't parse the Snappy output files.
When we use a command like "hdfs dfs -text hdfs-file-name" to view the generated files, the output looks like the following, with an "index"-like field added to each line:
0 2019-05-02|AMRS||5072||||3540||MMPT|0|
41 2019-05-02|AMRS||5538|HK|51218||1000||Dummy|45276|
118 2019-05-02|AMRS||5448|US|51218|TRADING|2282|HFT|NCR|45119|
That is, an extra value like "0 ", "41 ", or "118 " is added at the beginning of each line. Note that the .snappy files generated by Spark don't have this extra field.
Any idea how to prevent this extra field from being inserted?
Thanks a lot!
These are not indexes but rather keys generated by TextInputFormat, as explained here.
The class you supply for the input format should return key/value
pairs of Text class. If you do not specify an input format class, the
TextInputFormat is used as the default. Since the TextInputFormat
returns keys of LongWritable class, which are actually not part of the
input data, the keys will be discarded; only the values will be piped
to the streaming mapper.
And since you do not have any mapper defined in your job, those key/value pairs are written straight out to the file system. So, as the above excerpt hints, you need some sort of mapper that discards the keys. A quick-and-dirty fix is to use something already available as a pass-through, like the shell cat command:
hadoop jar /usr/hdp/2.6.5.106-2/hadoop-mapreduce/hadoop-streaming-2.7.3.2.6.5.106-2.jar \
-Dmapred.output.compress=true -Dmapred.compress.map.output=true \
-Dmapred.output.compression.codec=org.apache.hadoop.io.compress.SnappyCodec \
-Dmapred.reduce.tasks=0 \
-mapper /bin/cat \
-input input-path \
-output output-path
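As a side note, the mapred.* properties above are the old, deprecated names; they still work on Hadoop 2 but produce deprecation warnings. A roughly equivalent invocation using the current property names would be:

hadoop jar /usr/hdp/2.6.5.106-2/hadoop-mapreduce/hadoop-streaming-2.7.3.2.6.5.106-2.jar \
-Dmapreduce.output.fileoutputformat.compress=true \
-Dmapreduce.output.fileoutputformat.compress.codec=org.apache.hadoop.io.compress.SnappyCodec \
-Dmapreduce.map.output.compress=true \
-Dmapreduce.job.reduces=0 \
-mapper /bin/cat \
-input input-path \
-output output-path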
I have a huge bucket of S3 files that I want to put on HDFS. Given the number of files involved, my preferred solution is to use 'distributed copy'. However, for some reason I can't get hadoop distcp to pick up my Amazon S3 credentials. The command I use is:
hadoop distcp -update s3a://[bucket]/[folder]/[filename] hdfs:///some/path/ -D fs.s3a.awsAccessKeyId=[keyid] -D fs.s3a.awsSecretAccessKey=[secretkey] -D fs.s3a.fast.upload=true
However, it acts the same as if the '-D' arguments weren't there:
ERROR tools.DistCp: Exception encountered
java.io.InterruptedIOException: doesBucketExist on [bucket]: com.amazonaws.AmazonClientException: No AWS Credentials provided by BasicAWSCredentialsProvider EnvironmentVariableCredentialsProvider SharedInstanceProfileCredentialsProvider : com.amazonaws.SdkClientException: Unable to load credentials from service endpoint
I've looked at the hadoop distcp documentation but can't find an explanation there of why this isn't working. I've tried -Dfs.s3n.awsAccessKeyId as a flag, which didn't work either. I've read that explicitly passing credentials isn't good practice, so maybe this is just a gentle nudge to do it some other way?
How is one supposed to pass S3 credentials to distcp? Does anyone know?
It appears the format of the credential flags has changed since the previous version. The following command works:
hadoop distcp \
-Dfs.s3a.access.key=[accesskey] \
-Dfs.s3a.secret.key=[secretkey] \
-Dfs.s3a.fast.upload=true \
-update \
s3a://[bucket]/[folder]/[filename] hdfs:///some/path
In case someone runs into the same error while using -D hadoop.security.credential.provider.path: please ensure your credential store (the jceks file) is located on the distributed file system (HDFS). Since distcp starts from one of the NodeManager nodes, every node needs to be able to access it.
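For example, a rough sketch of that approach (the jceks path and user directory are placeholders, not from the original post): create the credential store on HDFS with the hadoop credential CLI, then point distcp at it instead of passing keys on the command line.

# Store the S3A keys in a jceks file on HDFS (you are prompted for each value).
hadoop credential create fs.s3a.access.key -provider jceks://hdfs/user/myuser/s3.jceks
hadoop credential create fs.s3a.secret.key -provider jceks://hdfs/user/myuser/s3.jceks

# Reference the store from distcp.
hadoop distcp \
-Dhadoop.security.credential.provider.path=jceks://hdfs/user/myuser/s3.jceks \
-update \
s3a://[bucket]/[folder]/[filename] hdfs:///some/path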
Koen's answer helped me; here is my version.
hadoop distcp \
-Dfs.s3a.aws.credentials.provider=org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider \
-Dfs.s3a.access.key=[accesskey] \
-Dfs.s3a.secret.key=[secretkey] \
-Dfs.s3a.session.token=[sessiontoken] \
-Dfs.s3a.fast.upload=true \
hdfs:///some/path s3a://[bucket]/[folder]/[filename]
We are experiencing some performance issues with Solr batch indexing: we have a cluster composed by 4 workers, each of which is equipped with 32 cores and 256GB of RAM. YARN is configured to use 100 vCores and 785.05GB of memory. The HDFS storage is managed by an EMC Isilon system connected through a 10Gb interface. Our cluster runs CDH 5.8.0, features Solr 4.10.3 and it is Kerberized.
With the current setup, in terms of compressed data, we can index about 25GB per day and 500GB per month using MapReduce jobs. Some of these jobs run daily, and they take almost 12 hours to index 15GB of compressed data. In particular, MorphlineMapper jobs last approximately 5 hours and TreeMergeMapper jobs about 6 hours.
Is this performance normal? Can you suggest some tweaks that could improve our indexing performance?
We are using the MapReduceIndexerTool and there are no network problems. We are reading compressed files from HDFS and decompressing them in our morphline. This is the way we run our script:
cmd_hdp=$(
HADOOP_OPTS="-Djava.security.auth.login.config=jaas.conf" hadoop --config /etc/hadoop/conf.cloudera.yarn \
jar /opt/cloudera/parcels/CDH/lib/solr/contrib/mr/search-mr-*-job.jar \
org.apache.solr.hadoop.MapReduceIndexerTool \
-D morphlineVariable.ZK_HOST=hostname1:2181/solr \
-D morphlineVariable.COLLECTION=my_collection \
-D mapreduce.map.memory.mb=8192 \
-D mapred.child.java.opts=-Xmx4096m \
-D mapreduce.reduce.java.opts=-Xmx4096m \
-D mapreduce.reduce.memory.mb=8192 \
--output-dir hdfs://isilonhostname:8020/tmp/my_tmp_dir \
--morphline-file morphlines/my_morphline.conf \
--log4j log4j.properties \
--go-live \
--collection my_collection \
--zk-host hostname1:2181/solr \
hdfs://isilonhostname:8020/my_input_dir/
)
The MorphlineMapper phase takes all available resources; the TreeMergeMapper phase takes only a couple of containers.
We don't need to run queries for the moment; we just need to index historical data. We are wondering if there is a way to speed up indexing and then optimize the collections for searching once indexing is complete.
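To illustrate that last step: optimizing a collection after a batch load is typically done through Solr's update handler. A hedged sketch (hostname, port, and segment count are assumptions; on a Kerberized cluster the request also needs SPNEGO authentication, hence the --negotiate flag):

# Force-merge the collection down to a few segments once indexing has finished.
curl --negotiate -u : \
"http://hostname1:8983/solr/my_collection/update?optimize=true&maxSegments=4&waitSearcher=true"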
I'm running Giraph, which executes on our small CDH4 Hadoop cluster of five hosts (four compute nodes and a head node - call them 0-3 and 'w') - see versions below. All five hosts are running mapreduce tasktracker services, and 'w' is also running the jobtracker. Resources are tight for my particular Giraph application (a kind of path-finding), and I've discovered that some configurations of the automatically-scheduled hosts for tasks work better than others.
More specifically, my Giraph command (see below) specifies four Giraph workers, and when executing, Hadoop (Zookeeper actually, IIUC) creates five tasks that I can see in the jobtracker web UI: one master and four slaves. When it puts three or more of the map tasks on 'w' (e.g., 01www or 1wwww), then that host maxes out ram, cpu, and swap, and the job hangs. However, when the system spreads the work out more evenly so that 'w' has only two or fewer tasks (e.g., 123ww or 0321w), then the job finishes fine.
My question is: 1) which program decides the task-to-host assignment, and 2) how do I control it?
Thanks very much!
Versions
CDH: 4.7.3
Giraph: Compiled as "giraph-1.0.0-for-hadoop-2.0.0-alpha" (CHANGELOG starts with: Release 1.0.0 - 2013-04-15)
Zookeeper Client environment: zookeeper.version=3.4.5-cdh4.4.0--1, built on 09/04/2013 01:46 GMT
Giraph command
hadoop jar $GIRAPH_HOME/giraph-ex.jar org.apache.giraph.GiraphRunner \
-Dgiraph.zkList=wright.cs.umass.edu:2181 \
-libjars ${LIBJARS} \
relpath.RelPathVertex \
-wc relpath.RelPathWorkerContext \
-mc relpath.RelPathMasterCompute \
-vif relpath.JsonAdjacencyListVertexInputFormat \
-vip $REL_PATH_INPUT \
-of relpath.JsonAdjacencyListTextOutputFormat \
-op $REL_PATH_OUTPUT \
-ca RelPathVertex.path=$REL_PATH_PATH \
-w 4
I am using Hadoop streaming, and I start the script as follows:
../hadoop/bin/hadoop jar ../hadoop/contrib/streaming/hadoop-streaming-1.0.4.jar \
-mapper ../tests/mapper.php \
-reducer ../tests/reducer.php \
-input data \
-output out
"data" is 2.5 GB txt file.
however in ps axf I can see only one mapper. i tried with -Dmapred.map.tasks=10, but result is the same - single mapper.
how can I make hadoop split my input file and start several mapper processes?
To elaborate on my comments: if your file isn't in HDFS and you're running with the local job runner, then the file will only be processed by a single mapper.
A large file is typically processed by several mappers because it is stored in HDFS as several blocks.
A 2.5 GB file with a block size of 512 MB will be split into ~5 blocks in HDFS. If the file is splittable (plain text, for example; gzip-compressed files are not splittable, and neither is a bare Snappy-compressed file outside a container format such as SequenceFile), then Hadoop will launch one mapper per block to process the file.
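For example, a rough sketch of getting the file into HDFS so it is split into blocks (the HDFS destination path is a placeholder):

# Copy the local file into HDFS, where it is stored as multiple blocks.
hadoop fs -mkdir /user/me/data
hadoop fs -put data /user/me/data/
# Check how many blocks it occupies.
hadoop fsck /user/me/data/data -files -blocks
# Then point the streaming job at the HDFS path instead of the local one:
../hadoop/bin/hadoop jar ../hadoop/contrib/streaming/hadoop-streaming-1.0.4.jar \
-input /user/me/data \
-output out \
-mapper ../tests/mapper.php \
-reducer ../tests/reducer.php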
Hope this helps explain what you're seeing