I need to process data stored in Amazon S3 and Amazon Glacier with Hadoop / EMR and save the output data in an RDBMS, e.g. Vertica.
I am a total noob in big data. I have only gone through a few online sessions and PPTs about MapReduce and Spark, and have written a few dummy MapReduce jobs for learning purposes.
Till now I only have commands that let me import data from S3 to HDFS in Amazon EMR, and after processing the jobs store the results in HDFS files.
So here are my questions:
Is it really mandatory to sync data from S3 to HDFS before executing MapReduce, or is there a way to use S3 directly?
How can I make Hadoop access Amazon Glacier data?
And finally, how can I store the output in a database?
Any suggestions / references are welcome.
EMR clusters can read from and write to S3 directly, so there is no need to copy data to the cluster first. S3 has a Hadoop FileSystem implementation, so it can mostly be treated the same as HDFS.
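For example, a job can take S3 URIs as its input and output paths directly. A minimal sketch, assuming the stock Hadoop examples jar is present on the cluster; the jar path and bucket names below are placeholders, not something from your setup:

# Read input from S3 and write output back to S3, with no HDFS staging step.
hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar wordcount \
  s3://my-input-bucket/raw-logs/ \
  s3://my-output-bucket/wordcount-output/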
AFAIK your MR/Spark jobs cannot directly access data in Glacier; the data first has to be restored from Glacier, which is by itself a lengthy procedure.
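If the objects were archived to Glacier through an S3 lifecycle rule, they can be restored back into S3 with the AWS CLI and then read like any other S3 object. A hedged sketch; the bucket, key, and 7-day restore window are placeholders, and archives stored directly in a Glacier vault would need the Glacier API instead:

# Request a temporary restore of an archived object back into S3.
aws s3api restore-object \
  --bucket my-bucket \
  --key logs/2015/archive.gz \
  --restore-request '{"Days":7,"GlacierJobParameters":{"Tier":"Standard"}}'
# The restore typically takes hours; once complete, the object is readable
# via its normal s3:// path until the restore window expires.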
Check out Sqoop for moving data between HDFS and a relational database.
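A rough sketch of a Sqoop export into Vertica, assuming the Vertica JDBC driver jar has been added to Sqoop's lib directory; the connection string, credentials, table, and delimiter are placeholders:

# Push the HDFS job output into a Vertica table over JDBC.
sqoop export \
  --connect jdbc:vertica://vertica-host:5433/mydb \
  --driver com.vertica.jdbc.Driver \
  --username dbuser --password dbpass \
  --table output_table \
  --export-dir /user/hadoop/job-output \
  --input-fields-terminated-by '\t'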
Recently I have been setting up my Hadoop cluster over an object store (S3): all data files are stored in S3 instead of HDFS, and I can successfully run Spark and MR jobs over S3. So I wonder whether my NameNode is still necessary, and if so, what does my NameNode do while I am running Hadoop applications over S3? Thanks.
No, provided you have a means to deal with the fact that S3 lacks the consistency needed by the output committers that ship with Hadoop. Every so often, if S3's listings are inconsistent enough, your results will be invalid and you won't even notice.
Different suppliers of Spark on AWS solve this in their own way. If you are using ASF Spark, there is nothing bundled which can do this.
https://www.youtube.com/watch?v=BgHrff5yAQo
I am trying to run a MapReduce job on spot instances.
I launch my instances using StarCluster and its Hadoop plugin. I have no problem uploading the data, putting it into HDFS, and then copying the results back from HDFS.
My question is: is there a way to load the data directly from S3 and push the results back to S3? (I don't want to manually download the data from S3 to HDFS and push the results from HDFS to S3; is there a way to do it in the background?)
I am using the standard MIT StarCluster AMI.
You cannot do it out of the box, but you can write a script to do it.
For example you can use:
hadoop distcp s3n://ID:KEY@mybucket/file /user/root/file
to copy the file from S3 directly into HDFS.
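So one option is a small wrapper script around the job: copy the input in from S3, run the job, and copy the results back out. A sketch with placeholder bucket, credentials, and job names:

# Stage input from S3, run the job, then push the results back to S3.
hadoop distcp s3n://ID:KEY@mybucket/input /user/root/input
hadoop jar my-job.jar MyJobClass /user/root/input /user/root/output
hadoop distcp /user/root/output s3n://ID:KEY@mybucket/output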
I am learning Hadoop in pseudo-distributed mode, so I am not very familiar with clusters. When I read about clusters, I learned that S3 is a data storage service and EC2 is a computing service, but I couldn't understand the real use of them. Will my HDFS be available in S3? Also, when I was learning Hive I came across moving data from HDFS to S3, and this was described as archival logic:
hadoop distcp /data/log_messages/2011/12/02 s3n://ourbucket/logs/2011/12/02
If my HDFS is landed on S3, how would that be beneficial? This might be silly, but if someone could give me an overview, that would be helpful.
S3 is just storage; no computation happens there. You can think of S3 as a bucket that holds data, and you can retrieve data from it using its API.
If you are using AWS/EC2, then your Hadoop cluster runs on EC2 instances, which is separate from S3. HDFS is Hadoop's own file system, designed to maximize input/output performance.
The command you shared is a distributed copy (distcp). It copies data from your HDFS to S3. In short, EC2 instances use HDFS as the default file system in a Hadoop environment, and you can move archived or unused data to S3, since S3 storage is cheaper than keeping it on EC2 machines.
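For a copy like the one quoted above, the S3 credentials can be supplied on the command line (or placed in core-site.xml). A sketch, assuming the older s3n:// connector; the property names differ for s3a://, and the access/secret keys below are placeholders:

hadoop distcp \
  -Dfs.s3n.awsAccessKeyId=YOUR_ACCESS_KEY \
  -Dfs.s3n.awsSecretAccessKey=YOUR_SECRET_KEY \
  /data/log_messages/2011/12/02 \
  s3n://ourbucket/logs/2011/12/02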
I have about 40 TB of data in Amazon S3 that I need to analyze using MapReduce. Our current IT policies do not provide an Amazon EMR account, so I have to rely on a locally managed Hadoop cluster. I wanted advice on whether it is advisable to use a local Hadoop cluster when our data is actually stored in S3.
Please check out https://wiki.apache.org/hadoop/AmazonS3 on how to use S3 as a replacement for HDFS. You can choose either S3 Native FileSystem or S3 Block FileSystem.
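Once the S3 credentials described on that wiki page are set in core-site.xml, a quick sanity check from the local cluster could be a simple listing; the bucket and path below are placeholders:

# Verify the local cluster can reach the S3 data before launching jobs over 40 TB.
hadoop fs -ls s3n://your-bucket/path/to/data/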
We are using Amazon EMR and Common Crawl to perform crawling. EMR writes the output to Amazon S3 in a binary-like format. We'd like to copy that to our local machines in raw-text format.
How can we achieve that? What's the best way?
Normally we could use hadoop fs -copyToLocal, but we can't access Hadoop directly and the data is on S3.
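One possible approach, assuming the EMR output is a Hadoop SequenceFile (a common case for this kind of binary-looking output) and that a local Hadoop client and the AWS CLI are installed; the bucket and paths are placeholders:

# Pull a part file down from S3, then dump the SequenceFile as plain text.
aws s3 cp s3://my-output-bucket/crawl-output/part-00000 ./part-00000
hadoop fs -text file://$(pwd)/part-00000 > part-00000.txt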