What does this error mean?
"Error in metadata: org.apache.thrift.transport.TTransportException"
In what cases does this error occur?
I am getting this error while creating tables and while loading data into a table.
org.apache.thrift.transport.TTransportException is a very generic error. The message is essentially saying that HiveServer is having a problem and suggesting that you look at the Hive logs. If you can access the full stack trace and share the exact details, we might find the real cause of the problem. Most of the times I have faced this error, it came down to issues with the Hive metastore (metadata missing or inaccessible), directory permission issues, concurrency-related issues, or HiveServer port problems.
You can try restarting the server and recreating your tables, or setting the Hive port before starting the server might help:
$export HIVE_PORT=10000
$hive --service hiveserver
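Once the server is up, you can sanity-check that it is actually listening on that port (a quick check, assuming netstat is available on the host):

# Verify that HiveServer is listening on the port exported above
$netstat -nlp | grep 10000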
There might be other reasons too, but we can only narrow it down once we have the full stack trace.
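If you are not sure where to find it, the Hive service log is usually the quickest way to get that stack trace. A rough sketch, assuming the default log4j settings (hive.log.dir defaults to /tmp/<user>, but this is configurable, so your path may differ):

# Default Hive log location unless hive.log.dir was changed
$tail -n 200 /tmp/$USER/hive.log

# Jump straight to the Thrift exception and the lines around it
$grep -n -A 20 "TTransportException" /tmp/$USER/hive.log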
Related
I am thus far unable to run Cassandra successfully, and I have reached the point where I believe it is more efficient to ask for help.
Installation method: datastax-ddc-64bit-3.9.0.msi
OS: Windows 7
Symptoms:
cmd> net start DataStax-DDC-Server
results in cmd output 'service is starting' and 'service was started successfully'.
datastax_ddc_server-stdout.log then contains this output, which is likely relevant:
WARN 10:38:17 Seed provider couldn't lookup host *myIPaddress*
Exception (org.apache.cassandra.exceptions.ConfigurationException) encountered during startup: The seed provider lists no seeds.
ERROR 10:38:17 Exception encountered during startup: The seed provider lists no seeds.
cmd> nodetool status
results in the following error message:
Error: Failed to connect to '127.0.0.1:7199': Connection refused
I would also like to note that the Cassandra CQL Shell closes immediately after I open it. I think it quickly flashes an error similar to the one above.
Please be patient with me if I have included some useless information or am not approaching my issue from the correct perspective. I have not worked with Apache Cassandra before, nor have I configured a machine to host an installation of any database engine.
Any help/insight is much appreciated,
Thanks!
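Update: from what I have read, the "seed provider lists no seeds" message refers to the seed_provider section of cassandra.yaml (in the conf directory of the installation). My understanding is that a minimal single-node setup should look roughly like the sketch below, with the node's own address filled in (127.0.0.1 here is only a placeholder):

seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          # list at least one reachable node; a single-node install usually lists itself
          - seeds: "127.0.0.1"

Is this the setting I should be checking first?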
I get this error sometimes when trying to save things to Parse or to fetch data from it.
This is not constant; it happens once in a while and causes the operation to fail.
I have contacted Parse about it. Here is their answer:
Starting on 4/28/2016, apps that have not migrated their database may see a "428" error code if the request cannot be handled by the remaining shared pool of resources. If you see this error in your logs, we highly recommend migrating the database for your app without delay.
This means that, starting on that date, all apps that have not begun migrating their database are served from a lower-priority shared pool of resources. So migrating the DB should resolve it.
I am trying to run Cloudera Manager and it is giving me the errors shown in the following screenshots, marked with a red pen.
Can anybody help me resolve those errors?
The error is quite straightforward: Cloudera Manager can't connect to the database with the credentials specified. Are you able to connect manually with the credentials provided in /etc/cloudera-scm-server/db.properties?
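A rough way to test that from the Cloudera Manager host is to read the settings the server is using and then try them by hand (a sketch assuming a MySQL-backed setup; the host, user, and database name below are placeholders, use whatever db.properties actually contains):

# Read the connection settings Cloudera Manager is configured with
$sudo cat /etc/cloudera-scm-server/db.properties

# Try the same credentials manually (placeholders below)
$mysql -h <db_host> -u <db_user> -p <db_name>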
It seems like it is trying to find the JDBC driver class for a MySQL database (and not Postgres). I can see the error below in your screenshot:
JDBC Driver Class not found: com.mysql.jdbc.driver
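Note that the actual driver class is com.mysql.jdbc.Driver (capital D), and it ships in the MySQL Connector/J jar. If that jar is genuinely missing, a rough fix on a package-based install looks like the sketch below; the exact path Cloudera Manager expects can vary by version, so treat these locations as assumptions and check the CM installation guide:

# Install the MySQL JDBC driver from the distro repositories (package name varies by distro)
$sudo yum install mysql-connector-java

# Or place the jar manually under the commonly expected name
$sudo cp mysql-connector-java-<version>-bin.jar /usr/share/java/mysql-connector-java.jar

# Restart the Cloudera Manager server so it picks the driver up
$sudo service cloudera-scm-server restart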
I am quite new to BAM and one of my hive queries is broken.
However I can't find what's wrong since the only error it gives me is
ERROR: Error while executing Hive script.Query returned non-zero code: 9, cause: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MapRedTask
I've looked around and found out that BAM is only capable of displaying that much information; for more detail I need to look at Hadoop's JobTracker. However, I can't find any info on how to turn it on or access it in the BAM server.
So how do I access it / turn it on?
Please don't be misled by the exception. Most probably this is a problem with the Hive query itself. To get a proper idea of the problem, you should send the back-end console log.
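In case it helps, a sketch of how that log is usually captured, assuming a standard WSO2 Carbon layout where the console output is also written to repository/logs/wso2carbon.log under the BAM installation directory:

# From the BAM installation directory; standard Carbon log location
$tail -n 200 repository/logs/wso2carbon.log

# Or watch it live while re-running the failing Hive script
$tail -f repository/logs/wso2carbon.log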
It seems like the problem is most probably with your Hive query and not with the Hadoop JobTracker. To make sure, please run one of the samples [1] and check whether its Hive queries execute properly. If they execute without a problem and the summarized results are displayed in the dashboards, the problem is with your Hive query.
[1] - http://docs.wso2.org/display/BAM240/HTTPD+Logs+Analysis+Sample
I encountered an error while adding a new service (Service Type = HDFS) using Cloudera Manager (Free Edition). The error message is as follows:
Could not create process: com.cloudera.cmf.service.config.ConfigFileSpec$GenerateException: Unable to process template:couldn't find the template hadoop/topology.py.vm
I checked /var/log/cloudera-scm-server/cloudera-scm-server.log and found a line like below.
org.apache.velocity.exception.ResourceNotFoundException: Unable to find resource '/WEB-INF/templates/hadoop/topology.py.vm'
I guess that a certain war file does not contain hadoop-metrics.properties.vm (a Velocity template file?) although it should, and that this might be related to WHIRR-370.
Could you help me to solve this problem, please?
May I ask which version of Cloudera Manager is being used? Did this error occur just after you tried to add the service, or some time after the service was added?
Based on the error, it seems some of the configuration is missing, which is why adding the service failed. So I would like to know how you installed Hadoop on this cluster.
If you download the virtual machine and compare it with your installation, you can check the folders for completeness and missing content. That always works for me.
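As a rough way to confirm whether the template is actually missing, you could also search both the filesystem and the Cloudera Manager web application archive for it. The paths below are assumptions, so adjust them to wherever cloudera-scm-server is installed on your host:

# Look for the Velocity template anywhere on the Cloudera Manager host
$sudo find / -name "topology.py.vm" 2>/dev/null

# If the templates ship inside the server's war/jar, list the archive contents
# (the archive path below is a placeholder)
$unzip -l /usr/share/cmf/<server-archive>.war | grep topology.py.vm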