I am running a Pig script from a shell script. I am concatenating 50 files and putting the result in HDFS, but when I try to load the file using the Pig script I get the error:
ERROR 2118: Input path does not exist:
But the file is there, and when I try to delete the file I get this error message in Hue:
Cannot perform operation. Note: you are a Hue admin but not a HDFS superuser, "hdfs" or part of HDFS supergroup, "supergroup".
[Errno 2] File /user/cloudera/xxxx/xxxx not found
Please help, as I am struggling with this.
I am using Cloudera 5.7.
Only the hdfs user or the directory owner can delete files from HDFS, so run your operations as the hdfs user (creating that user if it doesn't exist).
If you want to do this from the CLI, try the following:
sudo -u hdfs hdfs dfs -rmr /path/to/file
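For example, a minimal sketch of checking ownership first and then deleting as the hdfs superuser, reusing the redacted path from the question (note that -rmr is deprecated in newer releases in favour of -rm -r):
hdfs dfs -ls /user/cloudera/xxxx                        # check who owns the file first
sudo -u hdfs hdfs dfs -rm -r /user/cloudera/xxxx/xxxx   # delete as the hdfs superuser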
I am trying to upload a file to HDFS with:
sudo -u hdfs hdfs dfs -put /home/hive/warehouse/sample.csv hdfs://[ip_redacted]:9000/data
I can confirm that HDFS works, as I managed to create the /data directory just fine.
Even giving the full path to the .csv file gives the same error:
put: `/home/hive/warehouse/sample.csv': No such file or directory
Why is it giving this error?
I encountered this problem, too.
The user hdfs has no permission to access one of the file's ancestor directories, so it gives the error No such file or directory.
As crystyxn commented, using the environment variable HADOOP_USER_NAME instead of sudo -u hdfs worked.
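A minimal sketch of that approach, reusing the paths from the question (this assumes the cluster uses simple authentication rather than Kerberos, so HADOOP_USER_NAME just changes the user name reported to HDFS while the local file is still read as your own user):
export HADOOP_USER_NAME=hdfs
hdfs dfs -put /home/hive/warehouse/sample.csv hdfs://[ip_redacted]:9000/data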
Is the CSV file on your local system or in HDFS? You can use the -put command (or the -copyFromLocal command) ONLY to move a LOCAL file into the distributed file system.
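In other words, a short sketch with purely illustrative paths:
hdfs dfs -put /local/path/sample.csv /data            # local file -> HDFS
hdfs dfs -copyFromLocal /local/path/sample.csv /data  # equivalent to -put for local files
hdfs dfs -cp /data/sample.csv /data/backup.csv        # use -cp (or -mv) when the source is already in HDFS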
Does the Hadoop filesystem shell support moving an empty directory?
Assume that I have a directory, /user/abc, which is empty, and I run:
hadoop fs -mv /user/abc/* /user/xyz/*
When I execute the above command, it gives me the error
'/user/abc/*' does not exists.
However, if I put some data inside /user/abc, it executes successfully.
Does anyone know how to handle the empty-directory case?
Is there an alternative way to execute the above command without getting an error?
hadoop fs -mv /user/abc/* /user/xyz
The destination path doesn't need the trailing /*.
I think you want to rename the directory.
You can also use this:
hadoop fs -mv /user/abc /user/xyz
Because your xyz directory is empty, you don't get an error.
But if your xyz directory already has many files, you will get an error as well.
This answer should be correct, I believe:
hadoop fs -mv /user/abc /user/xyz
'*' is a wildcard, so it looks for any file inside the folder. When nothing is found, it returns the error.
As per the documentation for the command:
When you move a file, all links to other files remain intact, except when you move it to a different file system.
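To directly address the empty-directory case from the question, here is a sketch (assuming a bash shell) that only uses the wildcard when the source directory actually contains files; the paths are the ones from the question:
# FILE_COUNT is the second column printed by `hadoop fs -count`
files=$(hadoop fs -count /user/abc | awk '{print $2}')
if [ "$files" -gt 0 ]; then
  hadoop fs -mv /user/abc/* /user/xyz
fi
# note: this counts files only; adjust the test if /user/abc may contain only subdirectories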
I have created a file named "file.txt" in my local directory. Now I want to put it in HDFS by using:
$ hadoop fs -put file.txt abcd
I am getting a response like:
put: 'abcd': no such file or directory
I have never worked on Linux before. Please help me out: how do I put the file "file.txt" into HDFS?
If you don't specify an absolute path in Hadoop (HDFS or whatever other file system is used), it will prepend your user directory to create an absolute path.
By default, your home folder in HDFS should be /user/<user name>.
So in your case you are trying to create the file /user/<user name>/abcd and put into it the content of your local file.txt.
The user name is your operating system user on your local machine. You can get it with the whoami command.
The problem is that your user folder doesn't exist in HDFS, and you need to create it.
BTW, according to the Hadoop documentation, the correct command to work with HDFS is hdfs dfs instead of hadoop fs (https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HDFSCommands.html), but for now both should work.
Then:
If you don't know your user name on your local operating system, open a terminal and run the whoami command.
Then execute the following command, replacing <user name> with your user name:
hdfs dfs -mkdir -p /user/<user name>
And then you should be able to execute your put command.
NOTE: The -p parameter also creates the /user folder if it doesn't exist.
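Putting it together, a minimal sketch of the whole sequence (the user name is whatever whoami prints; you may need to run the mkdir as the hdfs superuser, e.g. with sudo -u hdfs, if your own user lacks permission to create folders under /user):
whoami                              # e.g. prints: cloudera
hdfs dfs -mkdir -p /user/$(whoami)  # create your HDFS home folder
hdfs dfs -put file.txt abcd         # now resolves to /user/<user name>/abcd
hdfs dfs -ls                        # with no path, lists your HDFS home folder to verify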
I run WordCount in Eclipse and my text file exists in HDFS.
Eclipse shows me this error:
Input path does not exist: file:/home/hduser/workspace/sample1user/hduser/test1
Your error shows that the WordCount job is searching for the file in the local filesystem and not in HDFS. Try copying the input file to the local file system.
Post the results of the following commands in your question:
hdfs dfs -ls /home/hduser/workspace/sample1user/hduser/test1
hdfs dfs -ls /home/hduser/workspace/sample1user/hduser
ls -l /home/hduser/workspace/sample1user/hduser/test1
ls -l /home/hduser/workspace/sample1user/hduser
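If those commands show the file only exists in HDFS, one option that matches the suggestion above is to copy it down to the local path the job is using (a sketch; the HDFS source path here is a hypothetical placeholder):
mkdir -p /home/hduser/workspace/sample1user/hduser
hdfs dfs -get /user/hduser/test1 /home/hduser/workspace/sample1user/hduser/test1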
I ran into a similar issue too (I am a beginner as well). I gave the full HDFS paths via the program arguments for the WordCount program, like below, and it worked (I was running in pseudo-distributed mode):
hdfs://krishl#localhost:9000/user/Perumal/Input hdfs://krish#localhost:9000/user/Perumal/Output
hdfs://krish#localhost:9000 is my HDFS location, and my Hadoop daemons were running during the testing.
Note: This may not be best practice, but it helped me get started!
I want to get the output files from HDFS to my local storage, so I ran this command in my Pig script:
Fs -get user/miner/adhoc/results/mine1.txt /home/miner/jeweler/results
Unfortunately, executing the code returns ERROR 2997: Encountered IOException.
I also saw: Default bootup file /var/lib/hadoop-yarn/.pigbootup not found.
Do I need to import something, or do I need to set certain properties in my Pig script?
It seems your path is incorrect, which is what gives the IOException: the leading slash is missing. The correct path is /user/miner/adhoc/results/mine1.txt.
You can also try this:
fs -copyToLocal /user/miner/adhoc/results/mine1.txt /home/miner/jeweler/results
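The same copy can also be done from a regular shell outside Pig, using the paths from the question (a sketch; make sure the local destination directory exists first):
mkdir -p /home/miner/jeweler/results
hdfs dfs -get /user/miner/adhoc/results/mine1.txt /home/miner/jeweler/results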