Hortonworks (HDP) version: 2.4
Spring XD version: 1.3.1
I am trying to run a Sqoop import job from Spring XD, and it fails with the output below. Can you let me know what the issue is here?
2016-05-16T11:29:54+0530 1.3.1.RELEASE INFO main tool.CodeGenTool - Beginning code generation
2016-05-16T11:29:54+0530 1.3.1.RELEASE INFO main manager.SqlManager - Executing SQL statement: SELECT t.* FROM Events AS t WHERE 1=0
2016-05-16T11:29:54+0530 1.3.1.RELEASE INFO main manager.SqlManager - Executing SQL statement: SELECT t.* FROM Events AS t WHERE 1=0
2016-05-16T11:29:54+0530 1.3.1.RELEASE INFO main orm.CompilationManager - HADOOP_MAPRED_HOME is /Users/nxjain/Documents/spring-xd-1.3.1.RELEASE/xd/lib/hdp22
2016-05-16T11:29:56+0530 1.3.1.RELEASE INFO main orm.CompilationManager - Writing jar file: /tmp/sqoop-nxjain/compile/855df3a42e72c92f2a5d87ffc3827e48/Events.jar
2016-05-16T11:29:56+0530 1.3.1.RELEASE INFO main mapreduce.ImportJobBase - Beginning import of Events
2016-05-16T11:29:57+0530 1.3.1.RELEASE WARN main util.NativeCodeLoader - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2016-05-16T11:29:57+0530 1.3.1.RELEASE INFO main manager.SqlManager - Executing SQL statement: SELECT t.* FROM Events AS t WHERE 1=0
2016-05-16T11:29:57+0530 1.3.1.RELEASE WARN main mapreduce.JobBase - SQOOP_HOME is unset. May not be able to find all job dependencies.
2016-05-16T11:29:57+0530 1.3.1.RELEASE INFO main client.RMProxy - Connecting to ResourceManager at /172.16.128.128:8050
2016-05-16T11:29:58+0530 1.3.1.RELEASE INFO main db.DBInputFormat - Using read commited transaction isolation
2016-05-16T11:29:58+0530 1.3.1.RELEASE INFO main mapreduce.JobSubmitter - number of splits:1
2016-05-16T11:29:59+0530 1.3.1.RELEASE INFO main mapreduce.JobSubmitter - Submitting tokens for job: job_1463375311370_0002
2016-05-16T11:29:59+0530 1.3.1.RELEASE INFO main impl.YarnClientImpl - Submitted application application_1463375311370_0002
2016-05-16T11:29:59+0530 1.3.1.RELEASE INFO main mapreduce.Job - The url to track the job: http://sandbox.hortonworks.com:8088/proxy/application_1463375311370_0002/
2016-05-16T11:29:59+0530 1.3.1.RELEASE INFO main mapreduce.Job - Running job: job_1463375311370_0002
2016-05-16T11:30:03+0530 1.3.1.RELEASE INFO main mapreduce.Job - Job job_1463375311370_0002 running in uber mode : false
2016-05-16T11:30:03+0530 1.3.1.RELEASE INFO main mapreduce.Job - map 0% reduce 0%
2016-05-16T11:30:03+0530 1.3.1.RELEASE INFO main mapreduce.Job - Job job_1463375311370_0002 failed with state FAILED due to: Application application_1463375311370_0002 failed 2 times due to AM Container for appattempt_1463375311370_0002_000002 exited with exitCode: 1
For more detailed output, check application tracking page:http://sandbox.hortonworks.com:8088/cluster/app/application_1463375311370_0002Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_e09_1463375311370_0002_02_000001
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:576)
at org.apache.hadoop.util.Shell.run(Shell.java:487)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:753)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:303)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Container exited with a non-zero exit code 1
Failing this attempt. Failing the application.
2016-05-16T11:30:03+0530 1.3.1.RELEASE INFO main mapreduce.Job - Counters: 0
2016-05-16T11:30:03+0530 1.3.1.RELEASE WARN main mapreduce.Counters - Group FileSystemCounters is deprecated. Use org.apache.hadoop.mapreduce.FileSystemCounter instead
2016-05-16T11:30:03+0530 1.3.1.RELEASE INFO main mapreduce.ImportJobBase - Transferred 0 bytes in 5.7269 seconds (0 bytes/sec)
2016-05-16T11:30:03+0530 1.3.1.RELEASE WARN main mapreduce.Counters - Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
2016-05-16T11:30:03+0530 1.3.1.RELEASE INFO main mapreduce.ImportJobBase - Retrieved 0 records.
2016-05-16T11:30:03+0530 1.3.1.RELEASE ERROR main tool.ImportTool - Error during import: Import job failed!
This is most likely due to a classpath issue - check the logs of the YARN container that failed.
Take a look at this comment: https://github.com/spring-projects/spring-xd/pull/1451#issue-56883941
There we added some specific config settings for the HDP version being used; you would have to adjust them for the version and build you are using (a sketch follows below). Just be aware that we haven't tested XD with HDP 2.4. It might work, though, so let us know how you make out.
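For reference, a rough sketch of the kind of entries that comment adds to XD's config/servers.yml. The fsUri and ResourceManager values below echo the sandbox host and port 8050 from your log, but the HDP build string (2.4.0.0-169) is an assumption on my part - take the exact property names and paths from the PR comment and from your own cluster, and restart the XD servers afterwards:

spring:
  hadoop:
    fsUri: hdfs://sandbox.hortonworks.com:8020
    resourceManagerHost: sandbox.hortonworks.com
    resourceManagerPort: 8050
    config:
      # version-specific MapReduce framework archive; the build id is assumed, not tested
      mapreduce.application.framework.path: '/hdp/apps/2.4.0.0-169/mapreduce/mapreduce.tar.gz#mr-framework'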
Related
I'm trying to do a bulk load in HBase, but the exception below comes up while loading the data...
Application application_1439213972129_0080 initialization failed (exitCode=255) with output: Requested user root is not whitelisted and has id 0,which is below the minimum allowed 500
Failing this attempt. Failing the application.
hbase org.apache.hadoop.hbase.mapreduce.ImportTsv -Dimporttsv.columns=HBASE_ROW_KEY,personal:Name,Profession:Position_Title,Profession:Department,personal:Employee_Annual_Salary -Dimporttsv.separator=',' /tables/emp_salary_new1 /mapr/MapRDev/apps/Datasets/Employee_Details.csv
2015-08-13 18:24:33,076 INFO [main] mapreduce.TableMapReduceUtil: Setting speculative execution off for bulkload operation
2015-08-13 18:24:33,123 INFO [main] mapreduce.TableMapReduceUtil: Configured 'hbase.mapreduce.mapr.tablepath' to /tables/emp_salary_new1
2015-08-13 18:24:33,220 INFO [main] Configuration.deprecation: io.bytes.per.checksum is deprecated. Instead, use dfs.bytes-per-checksum
2015-08-13 18:24:33,372 INFO [main] client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
2015-08-13 18:24:33,735 INFO [main] Configuration.deprecation: io.bytes.per.checksum is deprecated. Instead, use dfs.bytes-per-checksum
2015-08-13 18:24:33,770 INFO [main] mapreduce.TableOutputFormat: Created table instance for /tables/emp_salary_new1
2015-08-13 18:24:34,252 INFO [main] input.FileInputFormat: Total input paths to process : 1
2015-08-13 18:24:34,294 INFO [main] mapreduce.JobSubmitter: number of splits:1
2015-08-13 18:24:34,535 INFO [main] mapreduce.JobSubmitter: Submitting tokens for job: job_1439213972129_0055
2015-08-13 18:24:34,792 INFO [main] security.ExternalTokenManagerFactory: Initialized external token manager class - com.mapr.hadoop.yarn.security.MapRTicketManager
2015-08-13 18:24:35,031 INFO [main] impl.YarnClientImpl: Submitted application application_1439213972129_0055
2015-08-13 18:24:35,114 INFO [main] mapreduce.Job: The url to track the job: http://hadoop-c02n02.ss.sw.ericsson.se:8088/proxy/application_1439213972129_0055/
2015-08-13 18:24:35,115 INFO [main] mapreduce.Job: Running job: job_1439213972129_0055
2015-08-13 18:24:53,253 INFO [main] mapreduce.Job: Job job_1439213972129_0055 running in uber mode : false
2015-08-13 18:24:53,256 INFO [main] mapreduce.Job: map 0% reduce 0%
2015-08-13 18:24:53,281 INFO [main] mapreduce.Job: Job job_1439213972129_0055 failed with state FAILED due to: Application application_1439213972129_0055 failed 2 times due to AM Container for appattempt_1439213972129_0055_000002 exited with exitCode: 1
For more detailed output, check application tracking page:http://hadoop-c02n02.ss.sw.ericsson.se:8088/cluster/app/application_1439213972129_0055Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_e02_1439213972129_0055_02_000001
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:545)
at org.apache.hadoop.util.Shell.run(Shell.java:456)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:722)
at org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.launchContainer(LinuxContainerExecutor.java:304)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:354)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:87)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Shell output: main : command provided 1
main : user is mapradm
main : requested yarn user is mapradm
Container exited with a non-zero exit code 1
Failing this attempt. Failing the application.
2015-08-13 18:24:53,320 INFO [main] mapreduce.Job: Counters: 0
Looks like you are loading data into MapR-DB, not HBase. But that's fine - HBase commands are compatible with MapR-DB. I made just one small change to how the separator option is quoted in your command; see if that works for you.
hbase org.apache.hadoop.hbase.mapreduce.ImportTsv -Dimporttsv.columns=HBASE_ROW_KEY,personal:Name,Profession:Position_Title,Profession:Department,personal:Employee_Annual_Salary '-Dimporttsv.separator=,' /tables/emp_salary_new1 /mapr/MapRDev/apps/Datasets/Employee_Details.csv
I'm trying to run a Hadoop program to extract keywords from some abstracts on Ubuntu. When I run my program with Hadoop, I get the following error.
WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
WARN mapreduce.JobSubmitter: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
INFO input.FileInputFormat: Total input paths to process : 1
INFO mapreduce.JobSubmitter: number of splits:1
INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1404812840999_0001
INFO impl.YarnClientImpl: Submitted application application_1404812840999_0001
INFO mapreduce.Job: The url to track the job: http://shiva-VirtualBox:8088/proxy/application_1404812840999_0001/
INFO mapreduce.Job: Running job: job_1404812840999_0001
INFO mapreduce.Job: Job job_1404812840999_0001 running in uber mode : false
INFO mapreduce.Job: map 0% reduce 0%
INFO mapreduce.Job: Job job_1404812840999_0001 failed with state FAILED due to: Application application_1404812840999_0001 failed 2 times due to AM Container for appattempt_1404812840999_0001_000002 exited with exitCode: 1 due to: Exception from container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
org.apache.hadoop.util.Shell$ExitCodeException:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
at org.apache.hadoop.util.Shell.run(Shell.java:418)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:283)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:79)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
Container exited with a non-zero exit code 1
.Failing this attempt.. Failing the application.
14/07/08 14:21:44 INFO mapreduce.Job: Counters: 0
What's the cause of this error?
Note that I converted my MapReduce project to a Maven project in order to use the Lucene library in my code.
Is your ResourceManager really at /0.0.0.0:8032? It also seems you are not using ToolRunner, so try rewriting your MapReduce driver as described in "Hadoop: Implementing the Tool interface for MapReduce driver"; a minimal sketch follows.
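For what it's worth, here is a minimal sketch of such a driver. The class name KeywordExtractionDriver and the job name are made up for illustration (the original code isn't shown); the point is only the Tool/ToolRunner plumbing, which lets Hadoop parse generic options like -D, -fs, -jt and -libjars before your code runs:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class KeywordExtractionDriver extends Configured implements Tool {

    @Override
    public int run(String[] args) throws Exception {
        // getConf() returns the Configuration that ToolRunner populated from the command line
        Job job = Job.getInstance(getConf(), "keyword-extraction");
        job.setJarByClass(KeywordExtractionDriver.class);
        // set your Mapper/Reducer/output key-value classes here ...
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        // ToolRunner strips the generic options before handing the remaining args to run()
        System.exit(ToolRunner.run(new Configuration(), new KeywordExtractionDriver(), args));
    }
}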
Hope it helps
The number of threads has increased, and JVM memory and CPU are fully utilised. Increase the JVM size and the memory limits of the map and reduce tasks:
conf.set("mapreduce.map.memory.mb", "4096");
conf.set("mapreduce.map.java.opts", "-Xmx3500m");
conf.set("mapreduce.reduce.memory.mb", "4096");
conf.set("mapreduce.reduce.java.opts", "-Xmx3500m");
I am trying to process data with Elephant Bird in Pig, but I don't succeed in loading the data. Here is my Pig script:
register 'lib/elephant-bird-core-3.0.9.jar';
register 'lib/elephant-bird-pig-3.0.9.jar';
register 'lib/google-collections-1.0.jar';
register 'lib/json-simple-1.1.jar';
twitter = LOAD 'statuses.log.2013-04-01-00'
USING com.twitter.elephantbird.pig.load.JsonLoader('-nestedLoad');
DUMP twitter;
The output I get is:
[main] INFO org.apache.pig.Main - Apache Pig version 0.11.0-cdh4.3.0 (rexported) compiled May 27 2013, 20:48:21
[main] INFO org.apache.pig.Main - Logging error messages to: /home/hadoop1/twitter_test/pig_1374834826168.log
[main] INFO org.apache.pig.impl.util.Utils - Default bootup file /home/hadoop1/.pigbootup not found
[main] INFO org.apache.pig.backend.hadoop.executionengine.HExecutionEngine - Connecting to hadoop file system at: hdfs://master.hadoop:8020
[main] INFO org.apache.pig.backend.hadoop.executionengine.HExecutionEngine - Connecting to map-reduce job tracker at: master.hadoop:8021
[main] INFO org.apache.pig.tools.pigstats.ScriptState - Pig features used in the script: UNKNOWN
[main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MRCompiler - File concatenation threshold: 100 optimistic? false
[main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer - MR plan size before optimization: 1
[main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer - MR plan size after optimization: 1
[main] WARN org.apache.pig.backend.hadoop23.PigJobControl - falling back to default JobControl (not using hadoop 0.23 ?)
java.lang.NoSuchFieldException: jobsInProgress
at java.lang.Class.getDeclaredField(Class.java:1938)
at org.apache.pig.backend.hadoop23.PigJobControl.<clinit>(PigJobControl.java:58)
at org.apache.pig.backend.hadoop.executionengine.shims.HadoopShims.newJobControl(HadoopShims.java:102)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.compile(JobControlCompiler.java:285)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher.launchPig(MapReduceLauncher.java:177)
at org.apache.pig.PigServer.launchPlan(PigServer.java:1266)
at org.apache.pig.PigServer.executeCompiledLogicalPlan(PigServer.java:1251)
at org.apache.pig.PigServer.storeEx(PigServer.java:933)
at org.apache.pig.PigServer.store(PigServer.java:900)
at org.apache.pig.PigServer.openIterator(PigServer.java:813)
at org.apache.pig.tools.grunt.GruntParser.processDump(GruntParser.java:696)
at org.apache.pig.tools.pigscript.parser.PigScriptParser.parse(PigScriptParser.java:320)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:194)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:170)
at org.apache.pig.tools.grunt.Grunt.exec(Grunt.java:84)
at org.apache.pig.Main.run(Main.java:604)
at org.apache.pig.Main.main(Main.java:157)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.main(RunJar.java:208)
[main] INFO org.apache.pig.tools.pigstats.ScriptState - Pig script settings are added to the job
[main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - mapred.job.reduce.markreset.buffer.percent is not set, set to default 0.3
[main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Using reducer estimator: org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.InputSizeReducerEstimator
[main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.InputSizeReducerEstimator - BytesPerReducer=1000000000 maxReducers=999 totalInputFileSize=656085089
[main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Setting Parallelism to 1
[main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - creating jar file Job6015425922938886053.jar
[main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - jar file Job6015425922938886053.jar created
[main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Setting up single store job
[main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 1 map-reduce job(s) waiting for submission.
[JobControl] WARN org.apache.hadoop.mapred.JobClient - Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
[JobControl] INFO org.apache.hadoop.mapreduce.lib.input.FileInputFormat - Total input paths to process : 1
[JobControl] INFO org.apache.pig.backend.hadoop.executionengine.util.MapRedUtil - Total input paths (combined) to process : 5
[main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 0% complete
[main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - HadoopJobId: job_201307261031_0050
[main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Processing aliases twitter
[main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - detailed locations: M: twitter[10,10] C: R:
[main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - More information at: http://master.hadoop:50030/jobdetails.jsp?jobid=job_201307261031_0050
[main] WARN org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Ooops! Some job has failed! Specify -stop_on_failure if you want Pig to stop immediately on failure.
[main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - job job_201307261031_0050 has failed! Stop running all dependent jobs
[main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 100% complete
[main] ERROR org.apache.pig.tools.pigstats.SimplePigStats - ERROR 2997: Unable to recreate exception from backed error: Error: Found interface org.apache.hadoop.mapreduce.Counter, but class was expected
[main] ERROR org.apache.pig.tools.pigstats.PigStatsUtil - 1 map reduce job(s) failed!
[main] INFO org.apache.pig.tools.pigstats.SimplePigStats - Script Statistics:
HadoopVersion PigVersion UserId StartedAt FinishedAt Features
2.0.0-cdh4.3.0 0.11.0-cdh4.3.0 hadoop1 2013-07-26 12:33:48 2013-07-26 12:34:23 UNKNOWN
Failed!
Failed Jobs:
JobId Alias Feature Message Outputs
job_201307261031_0050 twitter MAP_ONLY Message: Job failed! hdfs://master.hadoop:8020/tmp/temp971280905/tmp1376631504,
Input(s):
Failed to read data from "hdfs://master.hadoop:8020/user/hadoop1/statuses.log.2013-04-01-00"
Output(s):
Failed to produce result in "hdfs://master.hadoop:8020/tmp/temp971280905/tmp1376631504"
Counters:
Total records written : 0
Total bytes written : 0
Spillable Memory Manager spill count : 0
Total bags proactively spilled: 0
Total records proactively spilled: 0
Job DAG:
job_201307261031_0050
[main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Failed!
[main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 2997: Unable to recreate exception from backed error: Error: Found interface org.apache.hadoop.mapreduce.Counter, but class was expected
Details at logfile: /home/hadoop1/twitter_test/pig_1374834826168.log
The file exists and is accessible:
$ hdfs dfs -ls /user/hadoop1/statuses.log.2013-04-01-00
Found 1 items
-rw-r--r-- 3 hadoop1 supergroup 656085089 2013-07-26 11:53 /user/hadoop1/statuses.log.2013-04-01-00
This seems to be a general problem with the Pig version shipped with Cloudera 4.6.0: the telling line is the one that says
[main] ERROR org.apache.pig.tools.pigstats.SimplePigStats - ERROR 2997: Unable to recreate exception from backed error: Error: Found interface org.apache.hadoop.mapreduce.Counter, but class was expected
I got a similar error when running another user-defined load function:
[main] ERROR org.apache.pig.tools.pigstats.SimplePigStats - ERROR 2997: Unable to recreate exception from backed error: Error: Found interface org.apache.hadoop.mapreduce.TaskAttemptContext, but class was expected
When I force Pig into local mode ("-x local"), I get the more obvious error:
Caused by: java.lang.IncompatibleClassChangeError: Found interface org.apache.hadoop.mapreduce.TaskAttemptContext, but class was expected
So the Hadoop version that Pig was compiled against seems to be incompatible with the one shipped with Cloudera, I guess.
This is indeed a versioning problem: some libraries are not yet compatible with the new MapReduce API; see, for example, issues #56, #247 and #308.
For Elephant Bird the issue is solved in a recent version. Using Elephant Bird 4.1 in the above code and adding the Hadoop compatibility module
register 'lib/elephant-bird-core-4.1.jar';
register 'lib/elephant-bird-pig-4.1.jar';
register 'lib/elephant-bird-hadoop-compat-4.1.jar';
register 'lib/google-collections-1.0.jar';
register 'lib/json-simple-1.1.jar';
solved the problem! :-)
I am trying to perform a simple join in Apache Pig. The datasets I use are from http://www.dtic.upf.edu/~ocelma/MusicRecommendationDataset/lastfm-1K.html
This is what I do in the Pig shell:
profiles = LOAD '/user/hadoop/tests/userid-profile.tsv' AS (id,gender,age,country, dreg);
songs = LOAD '/user/hadoop/tests/userid-timestamp-artid-artname-traid-traname.tsv' AS (userID, timestamp, artistID, artistName, trackID, trackName);
prDACH = filter profiles by country=='Germany' or country=='Austria' or country=='Switzerland';
songsDACH = join songs by userID, prDACH by id;
dump songsDACH;
This is part of the log:
2013-04-20 01:01:33,885 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 0% complete
2013-04-20 01:02:39,802 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 2% complete
2013-04-20 01:13:23,943 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 37% complete
2013-04-20 01:14:48,704 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 39% complete
2013-04-20 01:15:40,166 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 41% complete
2013-04-20 01:15:41,142 [main] WARN org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Ooops! Some job has failed! Specify -stop_on_failure if you want Pig to stop immediately on failure.
2013-04-20 01:15:41,143 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - job job_1366403809583_0023 has failed! Stop running all dependent jobs
2013-04-20 01:15:41,143 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 100% complete
2013-04-20 01:15:43,117 [main] ERROR org.apache.pig.tools.pigstats.SimplePigStats - ERROR 2997: Unable to recreate exception from backed error: AttemptID:attempt_1366403809583_0023_m_000019_0 Info:Container killed by the ApplicationMaster.
When I use a small sample of the songs, the join is performed without any problem.
Any ideas?
It looks like a problem with the HDFS settings, since I can perform the join using a subset of the songs data (100,000 samples).
PS: I am using the Cloudera demo VM.
You should have a look at the task attempt's log: point your browser at the JobTracker (http://[your-jobtracker-node]:50030), look for the failed job, find a failed task attempt, and browse through its log to see the actual exception. I suspect it may have something to do with the task heap size configuration, but you'll have to look at the exception first and then come up with a solution (a configuration change, etc.); a sketch of one such change follows below.
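In case the exception does point at memory, here is a hedged sketch of the kind of configuration change meant above, set straight from the Pig script. The property names are the MR2/YARN ones (the job id in your log suggests YARN), and the values are illustrative rather than tuned for this dataset:

-- give each map task a bigger container and JVM heap before the join runs
set mapreduce.map.memory.mb 2048;
set mapreduce.map.java.opts '-Xmx1536m';
-- on classic MR1 clusters the rough equivalent is the single knob:
-- set mapred.child.java.opts '-Xmx1536m';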
I'm trying to run a MapReduce job with Pig and Cassandra, and I always get the error:
ERROR 2118: Unable to create input splits for: cassandra://constellation/logs
[SOLVED]
There were some environment variables I had missed setting:
PIG_RPC_PORT, PIG_INITIAL_ADDRESS, PIG_PARTITIONER
/opt/cassandra-0.7.0-beta3/contrib/pig$ bin/pig_cassandra example-script.pig
10/11/15 17:38:26 INFO pig.Main: Logging error messages to: /opt/cassandra-0.7.0-beta3/contrib/pig/pig_1289839106859.log
2010-11-15 17:38:27,809 [main] INFO org.apache.pig.backend.hadoop.executionengine.HExecutionEngine - Connecting to hadoop file system at: hdfs://hadoop-master-1.dkd.lan:8020
2010-11-15 17:38:29,756 [main] INFO org.apache.pig.backend.hadoop.executionengine.HExecutionEngine - Connecting to map-reduce job tracker at: hadoop-master-1.dkd.lan:8021
2010-11-15 17:38:32,753 [main] INFO org.apache.pig.backend.hadoop.executionengine.HExecutionEngine - (Name: Store(hdfs://hadoop-master-1.dkd.lan/tmp/temp657556636/tmp-375431593:org.apache.pig.builtin.BinStorage) - 1-82 Operator Key: 1-82)
2010-11-15 17:38:32,960 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.CombinerOptimizer - Choosing to move algebraic foreach to combiner
2010-11-15 17:38:33,100 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer - MR plan size before optimization: 3
2010-11-15 17:38:33,100 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer - MR plan size after optimization: 3
2010-11-15 17:38:33,364 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - mapred.job.reduce.markreset.buffer.percent is not set, set to default 0.3
2010-11-15 17:38:38,771 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Setting up single store job
2010-11-15 17:38:38,999 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 1 map-reduce job(s) waiting for submission.
2010-11-15 17:38:39,055 [Thread-4] WARN org.apache.hadoop.mapred.JobClient - Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
2010-11-15 17:38:39,500 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 0% complete
2010-11-15 17:38:40,340 [Thread-4] INFO org.apache.hadoop.mapred.JobClient - Cleaning up the staging area hdfs://hadoop-master-1.dkd.lan/var/lib/hadoop-0.20/cache/mapred/mapred/staging/dkd-sprenger/.staging/job_201011101636_0011
2010-11-15 17:38:40,356 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 100% complete
2010-11-15 17:38:40,357 [main] ERROR org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 1 map reduce job(s) failed!
2010-11-15 17:38:40,402 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Failed!
2010-11-15 17:38:40,517 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 2997: Unable to recreate exception from backend error: org.apache.pig.backend.executionengine.ExecException: ERROR 2118: Unable to create input splits for: cassandra://constellation/logs
Details at logfile: /opt/cassandra-0.7.0-beta3/contrib/pig/pig_1289839106859.log
Anyone who has an idea? -> SOLVED: there were some environment variables I had missed setting.
Environment:
Ubuntu Server 10.4
Versions:
hadoop: 0.20
pig: 0.7
cassandra: 0.7.0 beta3
The asker already updated the question to include the answer:
[SOLVED] There were some environment variables I had missed setting:
PIG_RPC_PORT, PIG_INITIAL_ADDRESS, PIG_PARTITIONER
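For anyone landing here with the same ERROR 2118: these are the variables the pig_cassandra wrapper script reads to locate the cluster. A sketch with plausible values - localhost, the 9160 Thrift port typical for Cassandra 0.7, and RandomPartitioner are assumptions that must match your own cassandra.yaml, not values from the original post:

export PIG_RPC_PORT=9160                 # Cassandra Thrift port
export PIG_INITIAL_ADDRESS=localhost     # any reachable Cassandra node
export PIG_PARTITIONER=org.apache.cassandra.dht.RandomPartitioner   # same partitioner as cassandra.yaml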