So, I am trying to run the WordCount Hadoop application on Amazon EMR. I have my own data file, which I uploaded to the abc bucket, and I also added the wordcount.jar file under the abc bucket. Can anyone tell me: when we create the cluster, how do we give the path to the data file? And do we need to give an output directory path as well? If so, how?
The data file is passed in as a parameter to the JAR; the data file lives in the S3 bucket. The output also goes to an S3 bucket. You can use the same bucket: just have an /output directory in the bucket and send all output there.
https://blog.safaribooksonline.com/2013/05/07/running-hadoop-mapreduce-jobs-on-amazon-emr/
"""Our WordCount JAR file will take the JAR’s main file, followed by the bucket name where you uploaded the input data and the output path. Note, that you only have to provide the paths, and not the precise file names. Also, make sure that no output file exists in the output path. The format for specifying input and output paths is: s3n:///path."""
I am creating a CSV file in an ADLS folder. For example, the file name is sample.txt. Instead of a single file, I get a sample.txt/ directory containing part-000* files. My question is: is there a method to create a sample.txt file instead of a directory in PySpark? df.write() and df.save() both create a folder with multiple files inside it. Using coalesce(1) I can combine the multiple part-000* files into one part file, but how do I create a single CSV file?
Unfortunately, Spark doesn't support writing a data file without an enclosing folder. As a workaround, first create a single part (partition) file using coalesce or repartition:
```python
df \
    .coalesce(1) \
    .write \
    .format("csv") \
    .mode("overwrite") \
    .save("/mydata")
```
The above produces a /mydata directory containing a single part-000* CSV file, plus some hidden bookkeeping files.
Our data is now contained in a single CSV file, but its name is not user-friendly. We can rename that file and pull it out of the directory:
```python
data_location = "/mydata/"

# find the single part-* CSV file inside the directory
files = dbutils.fs.ls(data_location)
csv_file = [x.path for x in files if x.path.endswith(".csv")][0]

# rename it to /mydata.csv, then remove the now-unneeded directory
dbutils.fs.mv(csv_file, data_location.rstrip('/') + ".csv")
dbutils.fs.rm(data_location, recurse=True)
```
Then set up the account key and configure the storage account for access, and move the file from the Databricks location to ADLS. To move the file, we use dbutils.fs.mv:
```python
storage_account_name = "Storage account name"
storage_account_access_key = "storage account access key"

# for abfss:// paths the account key is set against the dfs endpoint
spark.conf.set(
    "fs.azure.account.key." + storage_account_name + ".dfs.core.windows.net",
    storage_account_access_key)

dbutils.fs.mv('/mydata.csv',
              'abfss://demo12@pratikstorage1.dfs.core.windows.net/mydata1.csv')
```
I use MultipleOutputs to write data to some absolute paths, instead of paths relative to the job's output path. I then get this error:
```
Error: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException):
Failed to create file [/test/convert.bak/326/201505110030/326-m-00035]
for [DFSClient_attempt_1425611626220_29142_m_000035_1001_-370311306_1] on client [192.168.7.146],
because this file is already being created by
[DFSClient_attempt_1425611626220_29142_m_000035_1000_-53988495_1] on [192.168.7.149]
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2320)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:2083)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2012)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:1963)
    at ...
```
https://issues.apache.org/jira/browse/MAPREDUCE-6357
Output files must be in ${mapred.output.dir}. The design and implementation don't support writing data to files outside ${mapred.output.dir}.
Looking at the stack trace, the output file has already been created by another task attempt: files under the job's output directory go through attempt-scoped temporary paths and are committed once, but an absolute path bypasses that mechanism, so a retried or speculative attempt collides with the first one on the same file. If you want to write your data into multiple files, generate those file names dynamically and keep them relative to the output directory, as in this code taken from Hadoop: The Definitive Guide (a fuller sketch follows the snippet):
```java
String basePath = String.format("%s/%s/part", parser.getStationId(), parser.getYear());
multipleOutputs.write(NullWritable.get(), value, basePath);
```
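For context, here is a minimal reducer sketch around that call, assuming the standard org.apache.hadoop.mapreduce.lib.output.MultipleOutputs API; I've simplified the base path to use the key instead of the book's station/year parser:

```java
import java.io.IOException;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.output.MultipleOutputs;

public class PartitionByKeyReducer extends Reducer<Text, Text, NullWritable, Text> {
    private MultipleOutputs<NullWritable, Text> multipleOutputs;

    @Override
    protected void setup(Context context) {
        multipleOutputs = new MultipleOutputs<>(context);
    }

    @Override
    protected void reduce(Text key, Iterable<Text> values, Context context)
            throws IOException, InterruptedException {
        for (Text value : values) {
            // The base path is interpreted relative to the job's output directory,
            // so each task attempt writes to its own temporary location and
            // retried attempts cannot collide on the same file.
            String basePath = String.format("%s/part", key.toString());
            multipleOutputs.write(NullWritable.get(), value, basePath);
        }
    }

    @Override
    protected void cleanup(Context context) throws IOException, InterruptedException {
        multipleOutputs.close();
    }
}
```

In the driver you can also use LazyOutputFormat.setOutputFormatClass(job, TextOutputFormat.class) so the default empty part-r-* files are not created.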
I hope this will help.
The error clearly suggests that the path you are trying to create already exists. So check whether that path exists before creating it, and delete it if it does:
```java
// conf is your job's Configuration
FileSystem hdfs = FileSystem.get(conf);
Path path = new Path(yourHadoopPath);
if (hdfs.exists(path)) {
    hdfs.delete(path, true); // true = delete recursively
}
```
Is there a way to copy a list of files from S3 to HDFS, instead of a complete folder, using s3distcp? This is for cases where srcPattern cannot work. I have multiple files in an S3 folder, all with different names, and I want to copy only specific files to an HDFS directory. I did not find any way to specify multiple source file paths to s3distcp. The workaround I am currently using is to list all the file names in srcPattern:
```
hadoop jar s3distcp.jar \
  --src s3n://bucket/src_folder/ \
  --dest hdfs:///test/output/ \
  --srcPattern '.*somefile.*|.*anotherone.*'
```
Can this work when the number of files is very large, say around 10,000?
hadoop distcp should solve your problem. We can use distcp to copy data from S3 to HDFS, and it supports wildcards as well as multiple source paths in one command. Go through the usage section of http://hadoop.apache.org/docs/r1.2.1/distcp.html.
Example:
Suppose you have the following files in an S3 bucket (test-bucket) inside a test1 folder:
abc.txt
abd.txt
defg.txt
And inside a test2 folder you have:
hijk.txt
hjikl.txt
xyz.txt
And your HDFS path is hdfs://localhost.localdomain:9000/user/test/. Then the distcp command for this particular pattern is as follows:
```
hadoop distcp \
  s3n://test-bucket/test1/ab*.txt \
  s3n://test-bucket/test2/hi*.txt \
  hdfs://localhost.localdomain:9000/user/test/
```
Yes, you can: create a manifest file listing all the files you need, and use the --copyFromManifest option as mentioned here.
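For reference, the invocation should look roughly like this; I'm going from memory of the EMR S3DistCp options, so treat the flag spellings as an assumption (the manifest itself is typically one written by an earlier run's --outputManifest option):

```
hadoop jar s3distcp.jar \
  --src s3n://bucket/src_folder/ \
  --dest hdfs:///test/output/ \
  --copyFromManifest \
  --previousManifest=s3://bucket/manifest.json.gz
```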
Now, I use MultipleOutputs. I would like to remove the suffix string "-00001" from the reducer's output file names, such as "xxxx-[r/m]-00001". Any ideas?
Thanks.
From the Hadoop javadoc for the write() method of MultipleOutputs:
Output path is a unique file generated for the namedOutput. For example, {namedOutput}-(m|r)-{part-number}
So you need to rename or merge these files on HDFS. You can do this in the job driver: when your job completes, change the file names there. You could also do it via terminal commands.
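Here is a sketch of doing the rename in the driver after job completion; the output directory and the "xxxx" file-name prefix are hypothetical, and it assumes at most one part file per stripped name so renames don't collide:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// ... after job.waitForCompletion(true) returns true:
Configuration conf = new Configuration();
FileSystem fs = FileSystem.get(conf);
Path outputDir = new Path("/your/output/dir"); // hypothetical
FileStatus[] parts = fs.globStatus(new Path(outputDir, "xxxx-[rm]-*"));
if (parts != null) {
    for (FileStatus status : parts) {
        Path src = status.getPath();
        // strip the trailing "-r-00001" / "-m-00001" style suffix
        String stripped = src.getName().replaceAll("-[rm]-\\d+$", "");
        fs.rename(src, new Path(outputDir, stripped));
    }
}
```

The equivalent from the terminal would be an hdfs dfs -mv per file.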
This may sound very basic, but I have a folder in HDFS with 3 kinds of files, e.g.:
access-02171990
s3.Log
catalina.out
I want my map/reduce job to read only the files whose names begin with access-. How do I do that: in the program, or by specifying it via the input directory path? Please help.
You can set the input path as a glob:
```java
FileInputFormat.addInputPath(jobConf, new Path("/your/path/access*"));
```