How to enable additional logging when running `hadoop fs` with MapR-FS?

When I run this command:
hadoop fs -copyFromLocal /tmp/1GB.img 'maprfs://maprfs.example.com/tmp/1GB.img'
I get the following errors:
2014-11-05 01:21:08,7669 ERROR Client fs/client/fileclient/cc/writebuf.cc:154 Thread: 240 FlushWrite failed: File 1GB.img, error: Invalid argument(22), pfid 4484.66.266002, off 65536, fid 5189.87.131376
14/11/05 01:21:08 ERROR fs.Inode: Write failed for file: /tmp/1GB.img, error: Invalid argument
14/11/05 01:21:08 ERROR fs.Inode: Marking failure for: /tmp/1GB.img, error: Invalid argument
14/11/05 01:21:08 ERROR fs.Inode: Throwing exception for: /tmp/1GB.img, error: Invalid argument
14/11/05 01:21:08 ERROR fs.Inode: Flush failed for file: /tmp/1GB.img, error: Invalid argument
14/11/05 01:21:08 ERROR fs.Inode: Marking failure for: /tmp/1GB.img, error: Invalid argument
14/11/05 01:21:08 ERROR fs.Inode: Throwing exception for: /tmp/1GB.img, error: Invalid argument
copyFromLocal: 4484.66.266002 /tmp/1GB.img (Invalid argument)
Can anyone suggest how to enable additional verbose/debug logging?
The above errors seem to be coming from the MapR Hadoop classes. It would be nice to enable more verbose logging in those packages, as well as in org.apache.*.
I tried modifying /opt/mapr/conf/logging.properties, but it didn't seem to help.
BTW, this is Hadoop 1.0.3 and MapR 3.1.1.26113.GA.
thanks,
Fi
p.s.
This is related to my question at http://answers.mapr.com/questions/11374/write-to-maprfs-with-hadoop-cli-fails-inside-docker-while-running-on-a-data-node#

You can also pass the property directly on the command line:
hadoop mfs -Dfs.mapr.trace=DEBUG -ls maprfs://maprfs.example.com/tmp/1GB.img
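The -D generic option should work the same way with the plain fs subcommands, so you can apply it directly to the failing copy (same property; untested against this exact build):
hadoop fs -Dfs.mapr.trace=DEBUG -copyFromLocal /tmp/1GB.img 'maprfs://maprfs.example.com/tmp/1GB.img'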

Found the answer, courtesy of http://answers.mapr.com/answer_link/6181/
Just add a fs.mapr.trace=debug property to /opt/mapr/hadoop/hadoop-0.20.2/conf/core-site.xml:
<configuration>
  <property>
    <name>fs.mapr.trace</name>
    <value>debug</value>
  </property>
</configuration>
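That covers the MapR filesystem client. For the org.apache.* classes mentioned in the question, stock Hadoop lets you raise the root logger for a single command via an environment variable (standard Apache Hadoop behavior, not MapR-specific):
HADOOP_ROOT_LOGGER=DEBUG,console hadoop fs -copyFromLocal /tmp/1GB.img 'maprfs://maprfs.example.com/tmp/1GB.img'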

Related

DSE is not starting, stating it is unable to write to the commit log directory

I am getting the below error while starting DSE:
ERROR [main] 2020-02-26 13:08:33,269 DseModule.java:97 - {}. Exiting...
com.google.inject.CreationException: Unable to create injector, see the following errors:
1) An exception was caught and reported. Message: Unable to check disk space available to /u01/dse_ops/logs. Perhaps the Cassandra user does not have the necessary permissions
at com.datastax.bdp.DseModule.configure(Unknown Source)
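The message itself points at filesystem permissions on /u01/dse_ops/logs. A quick check along those lines (assuming DSE runs as the cassandra user; adjust the user and path to your install):
ls -ld /u01/dse_ops/logs
# if the directory is missing or not writable by the service user:
sudo chown -R cassandra:cassandra /u01/dse_ops/logs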

FATAL datanode.DataNode: Exception in secureMain

I'm very new to Hadoop. After following the manual at http://toodey.com/2015/08/10/hadoop-installation-on-windows-without-cygwin-in-10-mints/ and running Hadoop, I got 3 errors:
1) FATAL datanode.DataNode: Exception in secureMain
java.lang.RuntimeException: Error while running command to get file permissions : ExitCodeException exitCode=-1073741515:
2) FATAL nodemanager.NodeManager: Error starting NodeManager
java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)
3) ERROR namenode.NameNode: Failed to start namenode.
java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z
I googled for many hours but unfortunately found no results.
What could be wrong? Thank you in advance.
Solved: installed Ubuntu instead of Windows.
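For anyone who needs to stay on Windows: the UnsatisfiedLinkError on NativeIO$Windows.access0 typically means the Hadoop native Windows binaries (winutils.exe and hadoop.dll) are missing from %HADOOP_HOME%\bin or were built for a different Hadoop version. A hedged sketch of the usual fix (paths are examples):
set HADOOP_HOME=C:\hadoop
set PATH=%PATH%;%HADOOP_HOME%\bin
rem winutils.exe and hadoop.dll must match your exact Hadoop version;
rem some setups also need hadoop.dll on the JVM's library path:
copy %HADOOP_HOME%\bin\hadoop.dll C:\Windows\System32\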

Error invoking "pig" in cloudera-quickstart-vm-5.8.0

I am a month into the Hadoop environment. I have cloudera-quickstart-vm-5.8.0 on my Windows laptop. While invoking 'pig' in the Cloudera VM, I am not able to enter the grunt shell; the error I am getting is below:
[Fatal Error] :-1:-1: Premature end of file.
2017-04-25 06:39:53,207 [main] FATAL org.apache.hadoop.conf.Configuration - error parsing conf hdfs-default.xml
org.xml.sax.SAXParseException; Premature end of file.
Kindly let me know how to resolve this.
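A SAXParseException complaining about "Premature end of file" while parsing a conf file usually means an empty or truncated XML file is shadowing the real one on the classpath. One way to hunt for it (conf paths are a guess for the quickstart VM):
# look for zero-byte XML files that would trip the conf parser
find /etc/hadoop/conf /etc/pig/conf -name '*.xml' -size 0 2>/dev/null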

Pig permission denied while hdfs file is readable

Running the following commands is successful:
hadoop fs -ls /path/
hadoop fs -cat /path/.pig_schema
And all the files in that dir have -rwxr-xr-x permissions.
However, running the following in the Pig console:
A = LOAD '/path/' USING PigStorage();
B = LIMIT A 5;
DUMP B;
encounters a permission error:
2015-08-27 08:47:59,734 [main] ERROR org.apache.pig.tools.grunt.Grunt - You don't have permission to perform the operation. Error from the server: Permission denied
2015-08-27 08:47:59,735 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 2017: Internal error creating job configuration.
Any idea why?
EDIT 1: Added error log
================================================================================
Pig Stack Trace
---------------
ERROR 2017: Internal error creating job configuration.

org.apache.pig.impl.logicalLayer.FrontendException: ERROR 1066: Unable to open iterator for alias B
    at org.apache.pig.PigServer.openIterator(PigServer.java:857)
    at org.apache.pig.tools.grunt.GruntParser.processDump(GruntParser.java:746)
    at org.apache.pig.tools.pigscript.parser.PigScriptParser.parse(PigScriptParser.java:320)
    at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:196)
    at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:171)
    at org.apache.pig.tools.grunt.Grunt.run(Grunt.java:69)
    at org.apache.pig.Main.run(Main.java:543)
    at org.apache.pig.Main.main(Main.java:157)
Caused by: org.apache.pig.PigException: ERROR 1002: Unable to store alias B
    at org.apache.pig.PigServer.storeEx(PigServer.java:956)
    at org.apache.pig.PigServer.store(PigServer.java:919)
    at org.apache.pig.PigServer.openIterator(PigServer.java:832)
    ... 7 more
Caused by: org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobCreationException: ERROR 2017: Internal error creating job configuration.
    at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.getJob(JobControlCompiler.java:874)
    at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.compile(JobControlCompiler.java:297)
    at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher.launchPig(MapReduceLauncher.java:177)
    at org.apache.pig.PigServer.launchPlan(PigServer.java:1285)
    at org.apache.pig.PigServer.executeCompiledLogicalPlan(PigServer.java:1270)
    at org.apache.pig.PigServer.storeEx(PigServer.java:952)
    ... 9 more
Caused by: java.io.IOException: Permission denied
    at java.io.UnixFileSystem.createFileExclusively(Native Method)
    at java.io.File.createTempFile(File.java:1879)
    at java.io.File.createTempFile(File.java:1923)
    at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.getJob(JobControlCompiler.java:538)
    ... 14 more
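Note where the root cause lands: java.io.UnixFileSystem.createFileExclusively via File.createTempFile is a local filesystem call, so Pig is failing to create a temp file in the client JVM's java.io.tmpdir, not on HDFS. A hedged workaround is to point the JVM at a directory the user can write to (the path below is just an example):
mkdir -p $HOME/pig-tmp
# PIG_OPTS is passed to the Pig JVM by the launch script
export PIG_OPTS="$PIG_OPTS -Djava.io.tmpdir=$HOME/pig-tmp"
pig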

Error when running Hive

Can anyone suggest why the following error is occurring and how to resolve it?
It's not only the command below; running any Hive-related command returns the same.
hive> show databases;
FAILED: Error in metadata: MetaException(message:Got exception: org.apache.thrift.transport.TTransportException java.net.SocketException: Connection reset by peer: socket write error)
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask
Check out your hive-site.xml. It is possible your javax.jdo.option.ConnectionURL, the URL for the Hive metastore, isn't right.
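For reference, that property lives in hive-site.xml; a quick way to see what it is currently set to (the conf path varies by distribution):
grep -A 2 'javax.jdo.option.ConnectionURL' /etc/hive/conf/hive-site.xml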
