I am quite new to BAM, and one of my Hive queries is broken.
However, I can't find what's wrong, since the only error it gives me is:
ERROR: Error while executing Hive script.Query returned non-zero code: 9, cause: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MapRedTask
I've looked around and found that BAM is only capable of displaying that much information; for more detail I need to look in Hadoop's JobTracker. However, I can't find any info on how to turn it on or access it in the BAM server.
So how do I access it / turn it on?
Please don't be misled by the exception. Most probably this is a problem with the Hive query itself. To get a proper idea of the problem, you should post the backend console log.
The problem is most probably with your Hive query and not with the Hadoop JobTracker. To make sure, please run one of the samples [1] and check whether its Hive queries execute properly. If the sample's queries run without a problem and the summarized results are displayed in the dashboards, then the problem is with your own Hive query.
[1] - http://docs.wso2.org/display/BAM240/HTTPD+Logs+Analysis+Sample
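If you want more detail than the one-line error BAM surfaces, the backend console log is the first place to look. A minimal sketch, assuming a default WSO2 Carbon installation layout (<BAM_HOME> is a placeholder for your install directory):
# Follow the BAM backend console log while the Hive script runs
$ tail -f <BAM_HOME>/repository/logs/wso2carbon.log
The full Hive stack trace for the failed query is usually printed there.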
Related
The log shows that the command took 49.03 minutes, and I can see the status of the job as "succeeded", but the data is not loaded.
Kindly help me out with possible causes.
In the log file, check the number of records processed and loaded in each step, and have a look at the tables used in the join.
Then perform a more detailed analysis by executing the steps manually and analysing the outcome, as sketched below.
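For example (the table names here are hypothetical placeholders for the tables in your failing job):
# Compare the record counts on each side of the join
$ hive -e "SELECT COUNT(*) FROM source_table;"
$ hive -e "SELECT COUNT(*) FROM summary_table;"
If the source counts match what the job logged but the summary table stays empty, the problem is in the load step rather than in the query itself.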
Thanks.
I'm using IBM InfoSphere DataStage for loading data in ETL processes.
I have a problem with one of my jobs.
The job is scheduled twice a month, and when it is run automatically by the tool, it gets an Oracle error:
ORA-08103: object no longer exists
But when we run it manually after it fails, there is no error at all and it finishes fine.
I tried to run the query in Oracle directly and it ran just fine.
The problem has happened twice, and each time, after the failure, the job ran fine with manual execution.
Any ideas?
Thanks.
I get this error sometimes when trying to save things to Parse or to fetch data from it.
This is not constant; it appears once in a while, making the operation fail.
I have contacted Parse for that. Here is their answer:
Starting on 4/28/2016, apps that have not migrated their database may see a "428" error code if the request cannot be handled by the remaining shared pool of resources. If you see this error in your logs, we highly recommend migrating the database for your app without delay.
This means that, starting on that date, all apps are on low priority except those that have migrated their database. So migrating the DB should resolve it.
What does this error mean?
"Error in metadata: org.apache.thrift.transport.TTransportException"
In what cases does this error come up?
I am getting this error while creating tables and while loading the data into the table.
org.apache.thrift.transport.TTransportException is a very generic error; the message is just saying that the HiveServer is having a problem and suggesting you take a look at the Hive logs. If you can access the full log stack and share the exact details, we might find the real cause of this problem. Most of the times I have faced this error, it was due to issues with the Hive metadata (e.g. being unable to access the metastore), directory permission issues, concurrency-related issues, or HiveServer port problems.
You can try restarting the server and recreating your tables, or setting the Hive port before starting the server might help:
$ export HIVE_PORT=10000       # 10000 is the default Thrift port for HiveServer
$ hive --service hiveserver    # start the standalone Hive Thrift server
There might be other reasons too, but we can only find out once we have the full log stack.
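For reference, a minimal sketch of where to find that log stack, assuming Hive's default log4j configuration (hive.log.dir defaults to /tmp/<your user name>):
# Show the most recent entries of the Hive log
$ tail -n 200 /tmp/$USER/hive.log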
I'm writing my first Avro job, which is meant to take an Avro file and output text. I tried to reverse-engineer it from this example:
https://gist.github.com/chriswhite199/6755242
I am getting the error below though.
Error: Found interface org.apache.hadoop.mapreduce.TaskAttemptContext, but class was expected
I looked around and found it was likely an issue with which jar files are being used. I'm running CDH4 with MR1 and am using the jar files below:
avro-tools-1.7.5.jar
hadoop-core-2.0.0-mr1-cdh4.4.0.jar
hadoop-mapreduce-client-core-2.0.2-alpha.jar
I can't post code for security reasons, but it shouldn't need anything not used in the example code. I don't have Maven set up yet either, so I can't follow those routes. Is there something else I can try to get around these issues?
Try using Avro 1.7.3. This looks like the AVRO-1170 bug: the Avro MapReduce classes were compiled against one generation of Hadoop, but a jar from the other generation is on your classpath at runtime. TaskAttemptContext is a class in MR1 and an interface in MR2, which is exactly what the "found interface, but class was expected" error is complaining about.
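One way to confirm which Hadoop generation each jar was built for is to inspect TaskAttemptContext with javap; a minimal sketch using the jar names from the question:
# MR1 declares TaskAttemptContext as a class; MR2 declares it as an interface
$ javap -classpath hadoop-core-2.0.0-mr1-cdh4.4.0.jar org.apache.hadoop.mapreduce.TaskAttemptContext
$ javap -classpath hadoop-mapreduce-client-core-2.0.2-alpha.jar org.apache.hadoop.mapreduce.TaskAttemptContext
Those two jars will disagree, so make sure only the one matching your cluster (MR1 here) is on the runtime classpath, and use an Avro build compiled against the same generation.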