Hadoop RIPE-NCC hadoop-pcap SemanticException

I've used this library before with no problems, but now, after re-adding the JAR, I get this error.
Error: Error while compiling statement: FAILED: SemanticException 1:14 Input format must implement InputFormat. Error encountered near token 'pcaps' (state=42000,code=40000)
Since I've used it before, I doubt there's anything wrong with the library. Is there an additional step I forgot about when adding the JAR files? Thank you.
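For context, this SemanticException typically appears when Hive cannot load the input format class while compiling the statement, so one easy step to forget is registering the JAR in the same session that queries the table. Below is a minimal sketch of that step from Python, assuming HiveServer2 access via the pyhive package; the host, JAR path, and table name are placeholders.
from pyhive import hive

conn = hive.connect(host='localhost', port=10000)
cur = conn.cursor()
# register the hadoop-pcap JAR in this session so the pcap input format
# class is visible when the next statement is compiled
cur.execute("ADD JAR /path/to/hadoop-pcap-serde.jar")
cur.execute("SELECT * FROM pcaps LIMIT 10")
print(cur.fetchall())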

Related

Can't use put() to add data to HBase with happybase

My Python version is 3.7. After running pip3 install happybase, I started the command hbase thrift start and wrote a brief .py file as follows:
import happybase

connection = happybase.Connection('master')
table = connection.table('jmlr')  # 'jmlr' is a table in hbase
for i in table.scan():
    print(i)
table.put('001', {'title': 'dasds'})  # error here
connection.close()
When it got to table.put(), it reported this error:
thriftpy2.transport.base.TTransportException: TTransportException(type=4, message='TSocket read 0 bytes')
And at the same time, the thrift reported an error:
ERROR [thrift-worker-1] thrift.TBoundedThreadPoolServer: Error occurred during processing of message. java.lang.IllegalArgumentException: Invalid famAndQf provided.
But when I ran the Python file again just now, thrift gave me a different error:
thrift.TBoundedThreadPoolServer: Thrift error occurred during processing of message.
org.apache.thrift.protocol.TProtocolException: Bad version in readMessageBegin
I have tried adding parameters like protocol='compact' and transport='framed', but that didn't work; even table.scan() failed.
Everything works in the hbase shell, so I can't figure out what went wrong. I'm at a loss.
I ran into the same issue and found this solution. You need to include a Column Qualifier, even an empty one, in the column key passed to put() (the ':' symbol is the delimiter between the Column Family and the Column Qualifier):
table.put('001', {'title:': 'dasds'})
Also, you got a different error message on the second run of the script because the thrift server had already failed.
I hope this helps.
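For reference, here is a minimal sketch of the corrected script, assuming 'title' is an existing column family in the 'jmlr' table; the column key takes the 'family:qualifier' form, and bytes are the safest types under Python 3:
import happybase

connection = happybase.Connection('master')
table = connection.table('jmlr')
# the column key is 'family:qualifier'; the qualifier may be empty,
# but the ':' delimiter must be present
table.put(b'001', {b'title:': b'dasds'})
connection.close()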

Error while processing statement: FAILED: Hive Internal Error: hive.mapred.supports.subdirectories must be true

I stumbled on an error:
Error while processing statement: FAILED: Hive Internal Error: hive.mapred.supports.subdirectories must be true if any one of following is true: hive.optimize.listbucketing, mapred.input.dir.recursive and hive.optimize.union.remove.
This error occurred when I tried to load data recursively from an HDFS directory into a Hive table. I tried to set the following parameters:
SET hive.mapred.supports.subdirectories=true;
SET mapred.input.dir.recursive=true;
but it keeps throwing the same error. What could be wrong?
Thanks for the advice.
This appears to be an issue with Hue in Cloudera. I am currently using CDH 5.11.2 and just experienced this issue while trying to set the same statements.
If you connect to Hive through beeline (the command line) and perform your SET statements and queries there, it should work. I just verified this.
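If you need this from a script rather than beeline, the same workaround applies to any client that connects to HiveServer2 directly, because the SET statements then take effect for that session. A minimal sketch, assuming the pyhive package; the host and query are placeholders:
from pyhive import hive

conn = hive.connect(host='localhost', port=10000)
cur = conn.cursor()
# session-level settings, issued before the recursive query
cur.execute("SET hive.mapred.supports.subdirectories=true")
cur.execute("SET mapred.input.dir.recursive=true")
cur.execute("SELECT COUNT(*) FROM my_table")  # placeholder query
print(cur.fetchall())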

Trying to generate a PDF using TCPDF but getting a fatal error

Fatal error: require_once(): Failed opening required
'(mypath)tcpdf/tcpdf.php' (include_path='.:/usr/share/php:/usr/share/pear')
I'm getting this fatal error while executing. Please suggest a solution so I can generate the PDF.
Use this:
require_once('yourpath/tcpdf/tcpdf.php');
and write the full or relative path to tcpdf.php manually. Note the forward slashes: the include_path in the error shows a Linux system, where backslashes will not resolve as directory separators.

PIG Cassandra ERROR 2118 Could not get input splits

I started off trying to do a simple Pig + Cassandra integration with this tutorial from DataStax: http://docs.datastax.com/en/datastax_enterprise/4.5/datastax_enterprise/ana/anaPigExRel.html
but when I try to store the result into CQL, I get this error:
Message: org.apache.pig.backend.executionengine.ExecException: ERROR
2118: Could not get input splits
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigInputFormat.getSplits(PigInputFormat.java:279)
Any ideas what's happening? I read some answers here that suggested changing my PIG_PARTITIONER to Murmur3Partitioner,
which I already did, and it still happens. Is it a configuration issue?
export PIG_PARTITIONER=org.apache.cassandra.dht.Murmur3Partitioner
I found out that after adding
export PIG_PARTITIONER=org.apache.cassandra.dht.Murmur3Partitioner
I need to run source ~/.bashrc and launch pig from that same console.
I get another error now, but I think this case is solved.
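The underlying point is that pig only sees PIG_PARTITIONER if the variable is present in the environment of the process that launches it, which is why sourcing ~/.bashrc in that particular console mattered. A small sketch illustrating the same idea from Python; the script name is a placeholder:
import os
import subprocess

# a child process inherits the environment of the process that starts it,
# so PIG_PARTITIONER must be set before pig is launched
env = dict(os.environ,
           PIG_PARTITIONER='org.apache.cassandra.dht.Murmur3Partitioner')
subprocess.run(['pig', 'myscript.pig'], env=env)  # 'myscript.pig' is a placeholder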

SAS Hadoop error - picklist

I am executing a SAS program. I have declared CLASSPATH and the other variables properly. However, when I define a libname to access Hadoop, I get an error. The relevant SAS log follows.
ERROR: The Java picklist file was not found.
1 libname testdata spde './' hdfshost=default;
ERROR: tkhdjn1 constructNewObjectOfClass: failed.
ERROR: tkhdjn2 JnlFromException: Missing exception.
ERROR: Can't construct instance of class org.apache.hadoop.conf.Configuration.
ERROR: Probable classpath problem.
ERROR: Could not connect to HDFS.
ERROR: Libref TESTDATA is not assigned.
ERROR: Error in the LIBNAME statement.
Can someone please look into this and let me know exactly what the problem is?
My guess is that you're not providing the correct path in your libname statement. According to the documentation:
http://support.sas.com/documentation/cdl/en/engspdehdfsug/67403/HTML/default/viewer.htm#n1s4fhx0fko8zkn1fiinudodmmai.htm
you should have a fully qualified path, and './' is not fully qualified.
If I were you, I'd focus on double-checking all the requirements specified in the linked documentation.
