How to execute -fs in hadoop pig - hadoop

I want to get the output files from HDFS to my local storage, so I ran this command in my Pig script:
fs -get user/miner/adhoc/results/mine1.txt /home/miner/jeweler/results
Unfortunately, executing it returns ERROR 2997: Encountered IOException.
I also saw: default bootup file /var/lib/hadoop-yarn/.pigbootup not found
Do I need to import something, or do I need to set certain properties in my Pig script?

It seems your path is incorrect, which causes the IOException: the root slash is missing. The correct path is /user/miner/adhoc/results/mine1.txt
You can also try this:
fs -copyToLocal /user/miner/adhoc/results/mine1.txt /home/miner/jeweler/results
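As a minimal sketch (using the file and directory names from the question), the corrected command inside the Pig script would be:
fs -get /user/miner/adhoc/results/mine1.txt /home/miner/jeweler/results
and the same copy can be done from the shell with:
hadoop fs -get /user/miner/adhoc/results/mine1.txt /home/miner/jeweler/results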

Related

Error in Hadoop mv command for empty directory

Does the Hadoop filesystem shell support moving an empty directory?
Assume that the directory below is empty.
hadoop fs -mv /user/abc/* /user/xyz/*
When I execute the above command, it gives me the error
'/user/abc/*' does not exist.
However, if I put some data inside /user/abc/, it executes successfully.
Does anyone know how to handle an empty directory?
Is there any alternative that executes the above command without giving an error?
hadoop fs -mv /user/abc/* /user/xyz
The destination does not need the trailing /*.
I think you want to rename the directory. You can also use this:
hadoop fs -mv /user/abc /user/xyz
Because your xyz directory is empty, you don't get an error; but if xyz already contains files, you will get an error as well.
This answer should be correct, I believe:
hadoop fs -mv /user/abc /user/xyz
'*' is a wildcard, so it looks for any file inside the folder. When nothing is found, it returns the error.
As per the command's documentation:
When you move a file, all links to other files remain intact, except when you move it to a different file system.
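As a rough sketch (the directory names are the ones from the question), the wildcard move can be guarded so that an empty source directory does not produce the error:
# only move the contents if the wildcard actually matches something
if hadoop fs -ls '/user/abc/*' >/dev/null 2>&1; then
  hadoop fs -mv '/user/abc/*' /user/xyz
fi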

Hadoop namenode format error

I am trying to configure Hadoop on Ubuntu, but when executing the command bin/hadoop namenode -format it shows the following message:
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
/home/sonali/hadoop-2.2.0/bin/hdfs: line 201: /usr/lib/jvm/java-6-openjdk-i386/bin/java: No such file or directory
The problem seems to be with Java. Try to cd to the path above and it will most likely fail as well.
So you need to set JAVA_HOME properly in your .bashrc file.
You can try setting JAVA_HOME=/usr/lib/jvm/java-6-openjdk-i386
Also make sure that the JAVA_HOME set in hadoop_base_dir/etc/hadoop/hadoop-env.sh is correct.
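A minimal sketch of both settings, assuming the JDK really is installed at the path from the error message (point JAVA_HOME at wherever java actually lives on your machine):
# in ~/.bashrc
export JAVA_HOME=/usr/lib/jvm/java-6-openjdk-i386
export PATH=$JAVA_HOME/bin:$PATH
# in hadoop_base_dir/etc/hadoop/hadoop-env.sh
export JAVA_HOME=/usr/lib/jvm/java-6-openjdk-i386
Then run source ~/.bashrc and retry the namenode format command.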

ERROR 2118: Input path does not exist

I am running a Pig script from a shell script. I concatenate 50 files and put the result into HDFS, but when I try to load the file using the Pig script I get the error:
ERROR 2118: Input path does not exist:
The file is there, yet when I try to delete it I get this error message in Hue:
Cannot perform operation. Note: you are a Hue admin but not a HDFS superuser, "hdfs" or part of HDFS supergroup, "supergroup".
[Errno 2] File /user/cloudera/xxxx/xxxx not found
Please help, as I am struggling with this.
I am using Cloudera 5.7.
Only the hdfs user or the directory owner can delete files from HDFS. So create a new user called hdfs, then try your operations as that user.
If you want to do this from the CLI, try the command below:
sudo -u hdfs hdfs dfs -rmr /path/to/file
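As a rough sketch (the path is the placeholder one from the question), it can help to check who owns the path first; note that -rmr is the older form and -rm -r is its non-deprecated equivalent:
# check ownership and permissions of the path
sudo -u hdfs hdfs dfs -ls /user/cloudera/xxxx/xxxx
# remove it as the hdfs superuser
sudo -u hdfs hdfs dfs -rm -r /user/cloudera/xxxx/xxxx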

WordCount command can't find file location

I run WordCount in Eclipse and my text file exists in HDFS.
Eclipse shows me this error:
Input path does not exist: file:/home/hduser/workspace/sample1user/hduser/test1
Your error shows that WordCount is searching for the file in the local filesystem and not in HDFS. Try copying the input file to that local path.
Post the results of the following commands in your question:
hdfs dfs -ls /home/hduser/workspace/sample1user/hduser/test1
hdfs dfs -ls /home/hduser/workspace/sample1user/hduser
ls -l /home/hduser/workspace/sample1user/hduser/test1
ls -l /home/hduser/workspace/sample1user/hduser
I too ran into a similar issue (I am a beginner too). I passed the full HDFS paths as arguments to the WordCount program, as below, and it worked (I was running in pseudo-distributed mode):
hdfs://krish@localhost:9000/user/Perumal/Input hdfs://krish@localhost:9000/user/Perumal/Output
hdfs://krish@localhost:9000 is my HDFS location, and my Hadoop daemons were running during the test.
Note: this may not be the best practice, but it helped me get started!
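A rough command-line equivalent of the same idea (the jar name, class name and paths below are placeholders, not taken from the question):
# run the job against HDFS input/output paths instead of local ones
hadoop jar wordcount.jar WordCount \
  hdfs://localhost:9000/user/hduser/input \
  hdfs://localhost:9000/user/hduser/output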

Executing copyFromLocal command from one machine to another machine

I am executing the following command:
hadoop fs -copyFromLocal /tmp/temp/pattern_BS.conf hdfs://wihadoopn301p.prod.ch3.s.com:/user/hdfs/hadoop/qa2/BS/
Here I am trying to copy pattern_BS.conf from the local /tmp/temp folder into the hdfs://wihadoopn301p.prod.ch3.s.com:/user/hdfs/hadoop/qa2/BS/ location.
But it gives the following error:
copyFromLocal: For input string: ""
Usage: java FsShell [-copyFromLocal <localsrc> ... <dst>]
Please help me out in solving this problem.
I think you should use the command given below, because copyFromLocal expects the first argument to be a local file and the second argument to be a location in HDFS:
hadoop fs -copyFromLocal /tmp/temp/pattern_BS.conf /user/hdfs/hadoop/qa2/BS/
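For what it's worth, the "For input string: """ message likely comes from the empty port after the colon in hdfs://host:/path. A sketch of both alternatives (8020 is only an example port; use whatever your NameNode is actually configured with):
# plain path, resolved against the cluster's configured default filesystem
hadoop fs -copyFromLocal /tmp/temp/pattern_BS.conf /user/hdfs/hadoop/qa2/BS/
# or a full URI with an explicit port
hadoop fs -copyFromLocal /tmp/temp/pattern_BS.conf hdfs://wihadoopn301p.prod.ch3.s.com:8020/user/hdfs/hadoop/qa2/BS/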
