Reading data from csv file using pig - hadoop

I'm trying to read a CSV file in the Pig shell on a Mac. All I'm doing is loading a file into an alias and dumping it. Here is how I'm doing it:
movies = LOAD '/user/myhome/movies_data.csv' USING PigStorage(',') as (id,name,year,rating,duration);
DUMP movies;
The data I'm using was downloaded from GitHub (provided here).
This file is available in the locally installed HDFS on my Mac. When I run DUMP, I get an error:
org.apache.pig.impl.logicalLayer.FrontendException: ERROR 1066: Unable to open iterator for alias movies
    at org.apache.pig.PigServer.openIterator(PigServer.java:935)
    at org.apache.pig.tools.grunt.GruntParser.processDump(GruntParser.java:754)
    at org.apache.pig.tools.pigscript.parser.PigScriptParser.parse(PigScriptParser.java:376)
    at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:230)
    at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:205)
    at org.apache.pig.tools.grunt.Grunt.run(Grunt.java:66)
    at org.apache.pig.Main.run(Main.java:565)
    at org.apache.pig.Main.main(Main.java:177)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: java.io.IOException: Job terminated with anomalous status FAILED
    at org.apache.pig.PigServer.openIterator(PigServer.java:927)
    ... 13 more
When I follow the application link in the cluster UI for this job, I see the following exception:
Diagnostics: Exception from container-launch.
Container id: container_1443887668938_0007_02_000001
Exit code: 127
Stack trace: ExitCodeException exitCode=127:
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
    at org.apache.hadoop.util.Shell.run(Shell.java:455)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
    at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Container exited with a non-zero exit code 127
Failing this attempt. Failing the application.
Pig version is 0.15.0 and Hadoop is 2.6.1. Am I missing something here?

You could use CSVLoader from Piggybank. Download the piggybank jar if you don't already have it, register it, and use CSVLoader. Something like this:
register '/your/path/to/piggybank/jar';
define CSVLoader org.apache.pig.piggybank.storage.CSVLoader();
movies = LOAD '/user/myhome/movies_data.csv' USING CSVLoader as (id,name,year,rating,duration);
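Note that without a typed schema, Pig treats every field as bytearray. If you want typed fields, you can spell the types out in the schema; the types below are guesses based on the column names:
movies = LOAD '/user/myhome/movies_data.csv' USING CSVLoader AS (id:int, name:chararray, year:int, rating:double, duration:int);
DUMP movies;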

Related

Hadoop Pig Latin always fails to Load Data

I'm taking my first steps with Hadoop's Pig Latin, but I'm stuck because I can't load any input data even though it exists:
R = LOAD '/home/cloudera/Desktop/vol.csv' USING PigStorage(';')
    AS (AnneeVol:int, MoisVol:int, JourVol:int, NumVol:int, AeroDep:chararray, AeroArriv:chararray, DistVol:int);
(Screenshot: file location)
Upon running:
DUMP R;
I get this error:
Failed Jobs:
JobId                    Alias   Feature    Message   Outputs
job_1590934825774_0010   R       MAP_ONLY   Message: org.apache.pig.backend.executionengine.ExecException: ERROR 2118: Input path does not exist: hdfs://quickstart.cloudera:8020/home/cloudera/pig_lab/input/vol.csv
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigInputFormat.getSplits(PigInputFormat.java:288)
at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:305)
at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:322)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:200)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1307)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1304)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1917)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1304)
at org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob.submit(ControlledJob.java:335)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.pig.backend.hadoop23.PigJobControl.submit(PigJobControl.java:128)
at org.apache.pig.backend.hadoop23.PigJobControl.run(PigJobControl.java:191)
at java.lang.Thread.run(Thread.java:745)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher$1.run(MapReduceLauncher.java:270)
Caused by: org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: hdfs://quickstart.cloudera:8020/home/cloudera/pig_lab/input/vol.csv
at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:323)
at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:265)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigTextInputFormat.listStatus(PigTextInputFormat.java:36)
at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:387)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigInputFormat.getSplits(PigInputFormat.java:274)
... 18 more
hdfs://quickstart.cloudera:8020/tmp/temp14790577/tmp132115611,
Input(s): Failed to read data from "/home/cloudera/pig_lab/input/vol.csv"
Output(s): Failed to produce result in "hdfs://quickstart.cloudera:8020/tmp/temp14790577/tmp132115611"
How do I fix this issue?
Thank you!
The program tries to find vol.csv in HDFS, not on your local file system:
Input path does not exist: hdfs://quickstart.cloudera:8020/home/cloudera/pig_lab/input/vol.csv
Check core-site.xml for your default file system. Currently it will have the value hdfs://quickstart.cloudera:8020, which is why Pig looks for the file in HDFS. You DON'T have to change anything there.
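For reference, the default file system is set by the fs.defaultFS property (fs.default.name in older releases); in core-site.xml it typically looks like this:
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://quickstart.cloudera:8020</value>
</property>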
Just add a file:// prefix to the path to tell Pig to read vol.csv from your local file system:
R = LOAD 'file:///home/cloudera/Desktop/vol.csv' USING PigStorage(';') AS (AnneeVol:int,MoisVol:int,JourVol:int,NumVol:int,AeroDep:chararray,AeroArriv:chararray,DistVol:int);
Ref: Cloudera blog
If that doesn't help, put the file in HDFS instead and then refer to that location in your code:
hdfs dfs -put /home/cloudera/Desktop/vol.csv hdfs://quickstart.cloudera:8020/user/<hdfs-user>/
Then, in your code:
R = LOAD '/user/<hdfs-user>/vol.csv' USING PigStorage(';') AS (AnneeVol:int,MoisVol:int,JourVol:int,NumVol:int,AeroDep:chararray,AeroArriv:chararray,DistVol:int);
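If you're unsure whether the -put succeeded, a quick listing confirms the file is where the LOAD statement expects it (same placeholder user path as above):
hdfs dfs -ls /user/<hdfs-user>/vol.csv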

PIG Unable to Read Local CSV Leading to Job Failure

I'm relatively new to the Pig/Hadoop ecosystem and am encountering a frustrating issue when trying to execute a simple DUMP. I am trying to run the Pig script below (the file is local, not HDFS, so I am opening the Pig shell using pig -x local).
REGISTER utils.py USING jython AS utils;
events = LOAD '../test/events.csv' USING PigStorage(',') AS (patientid:int, eventid:chararray, eventdesc:chararray, timestamp:chararray, value:float);
events = FOREACH events GENERATE patientid, eventid, ToDate(timestamp, 'yyyy-MM-dd') AS etimestamp, value;
DUMP events;
However, when doing this, I receive the following error messages (failed job summary below, full PIG stack trace at bottom):
Input(s): Failed to read data from "file:///bootcamp/test/events.csv"
Output(s): Failed to produce result in "file:/tmp/temp/305054006/tmp-908064458"
Pig Stack Trace:
ERROR 1066: Unable to open iterator for alias events. Backend error : java.lang.IllegalStateException: Job in state DEFINE instead of RUNNING
org.apache.pig.impl.logicalLayer.FrontendException: ERROR 1066: Unable to open iterator for alias events. Backend error : java.lang.IllegalStateException: Job in state DEFINE instead of RUNNING
at org.apache.pig.PigServer.openIterator(PigServer.java:925)
at org.apache.pig.tools.grunt.GruntParser.processDump(GruntParser.java:746)
at org.apache.pig.tools.pigscript.parser.PigScriptParser.parse(PigScriptParser.java:372)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:230)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:205)
at org.apache.pig.tools.grunt.Grunt.run(Grunt.java:66)
at org.apache.pig.Main.run(Main.java:558)
at org.apache.pig.Main.main(Main.java:170)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: org.apache.pig.backend.executionengine.ExecException: ERROR 0: java.lang.IllegalStateException: Job in state DEFINE instead of RUNNING
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher.getStats(MapReduceLauncher.java:822)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher.launchPig(MapReduceLauncher.java:452)
at org.apache.pig.backend.hadoop.executionengine.HExecutionEngine.launchPig(HExecutionEngine.java:280)
at org.apache.pig.PigServer.launchPlan(PigServer.java:1390)
at org.apache.pig.PigServer.executeCompiledLogicalPlan(PigServer.java:1375)
at org.apache.pig.PigServer.storeEx(PigServer.java:1034)
at org.apache.pig.PigServer.store(PigServer.java:997)
at org.apache.pig.PigServer.openIterator(PigServer.java:910)
... 13 more
Caused by: java.lang.IllegalStateException: Job in state DEFINE instead of RUNNING
at org.apache.hadoop.mapreduce.Job.ensureState(Job.java:294)
at org.apache.hadoop.mapreduce.Job.getTaskReports(Job.java:540)
at org.apache.pig.backend.hadoop.executionengine.shims.HadoopShims.getTaskReports(HadoopShims.java:235)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher.getStats(MapReduceLauncher.java:801)
...20 more
I have seen similar issues regarding failed jobs, but sadly I haven't managed to hunt down a resolution yet.
EDIT: I should mention that I encountered the same issue when following the Pig tutorial at the link below.
http://www.sunlab.org/teaching/cse8803/fall2016/lab/hadoop-pig/
So, I found I was able to DUMP the file by doing the following:
tmp = LIMIT events 100000; -- any int larger than the number of rows
DUMP tmp;
I had seen a similar issue reported on here, and was able to resolve it by running as root.
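For reference, "running as root" here just means launching the same local-mode run with elevated permissions, which is only worth trying if your user can't write Pig's local temp directories (the script name below is hypothetical):
sudo pig -x local events_script.pig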

Apache Pig Error: java.lang.reflect.InvocationTargetException

I get an error when I try to load or save my data from Apache Pig in any format other than CSV. Here is my Pig code:
REGISTER /usr/local/Cellar/pig/0.15.0/libexec/*.jar
REGISTER /usr/local/Cellar/pig/0.15.0/libexec/lib/*.jar
REGISTER /usr/local/Cellar/hbase/1.1.2/libexec/lib/*.jar
REGISTER /usr/local/Cellar/hadoop/2.7.2/libexec/share/hadoop/common/*.jar
REGISTER /usr/local/Cellar/hadoop/2.7.2/libexec/share/hadoop/common/lib/*.jar
REGISTER /usr/local/Cellar/hadoop/2.7.2/libexec/share/hadoop/mapreduce/*.jar
REGISTER /usr/local/Cellar/hadoop/2.7.2/libexec/share/hadoop/mapreduce/lib/*.jar
REGISTER /usr/local/Cellar/pig/0.15.0/libexec/lib/piggybank.jar
--generated_data = LOAD 'tableHive' USING org.apache.hive.hcatalog.pig.HCatLoader(',') AS (level:chararray, score:INT, attraction:chararray);
generated_data = LOAD 'CSVResults/ireland.csv' USING PigStorage(',') AS (level:chararray, score:INT, attraction:chararray);
DUMP generated_data;
fiveRating = FILTER generated_data BY (float)score>4;
level6 = FILTER fiveRating BY (float)level>5;
groupedbylevel = group level6 by attraction;
countAttractions = FOREACH groupedbylevel {
level6Attractions = CROSS level6.level;
generate group, COUNT(level6Attractions) AS listBylevel6;
};
orderlist = ORDER countAttractions BY listBylevel6 DESC;
limitorder = LIMIT orderlist 20;
STORE limitorder into 'Level6AttractionsIreland-limited2' using PigStorage(',');
STORE countAttractions into 'hbase://Level6AttractionsIreland' USING org.apache.pig.backend.hadoop.hbase.HBaseStorage('Ireland:level Ireland:score Ireland:attraction');
STORE countAttractions INTO 'Level6AttractionsIreland' USING org.apache.pig.backend.hadoop.hbase.HBaseStorage(',');
And here is the error from the Pig log file:
Pig Stack Trace
---------------
ERROR 2999: Unexpected internal error. java.io.IOException: java.lang.reflect.InvocationTargetException
java.lang.RuntimeException: java.io.IOException: java.lang.reflect.InvocationTargetException
at org.apache.hadoop.hbase.mapreduce.TableOutputFormat.setConf(TableOutputFormat.java:211)
at org.apache.pig.backend.hadoop.hbase.HBaseStorage.getOutputFormat(HBaseStorage.java:928)
at org.apache.pig.newplan.logical.visitor.InputOutputFileValidatorVisitor.visit(InputOutputFileValidatorVisitor.java:69)
at org.apache.pig.newplan.logical.relational.LOStore.accept(LOStore.java:66)
at org.apache.pig.newplan.DepthFirstWalker.depthFirst(DepthFirstWalker.java:64)
at org.apache.pig.newplan.DepthFirstWalker.depthFirst(DepthFirstWalker.java:66)
at org.apache.pig.newplan.DepthFirstWalker.walk(DepthFirstWalker.java:53)
at org.apache.pig.newplan.PlanVisitor.visit(PlanVisitor.java:52)
at org.apache.pig.newplan.logical.relational.LogicalPlan.validate(LogicalPlan.java:212)
at org.apache.pig.PigServer$Graph.compile(PigServer.java:1767)
at org.apache.pig.PigServer$Graph.access$300(PigServer.java:1443)
at org.apache.pig.PigServer.execute(PigServer.java:1356)
at org.apache.pig.PigServer.executeBatch(PigServer.java:415)
at org.apache.pig.PigServer.executeBatch(PigServer.java:398)
at org.apache.pig.tools.grunt.GruntParser.executeBatch(GruntParser.java:171)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:234)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:205)
at org.apache.pig.tools.grunt.Grunt.exec(Grunt.java:81)
at org.apache.pig.Main.run(Main.java:631)
at org.apache.pig.Main.main(Main.java:177)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: java.io.IOException: java.lang.reflect.InvocationTargetException
at org.apache.hadoop.hbase.client.HConnectionManager.createConnection(HConnectionManager.java:459)
at org.apache.hadoop.hbase.client.HConnectionManager.createConnection(HConnectionManager.java:436)
at org.apache.hadoop.hbase.client.HConnectionManager.getConnection(HConnectionManager.java:317)
at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:198)
at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:160)
at org.apache.hadoop.hbase.mapreduce.TableOutputFormat.setConf(TableOutputFormat.java:206)
... 25 more
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.hbase.client.HConnectionManager.createConnection(HConnectionManager.java:457)
... 30 more
Caused by: java.lang.NoClassDefFoundError: org/cloudera/htrace/Trace
at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:218)
at org.apache.hadoop.hbase.zookeeper.ZKUtil.checkExists(ZKUtil.java:481)
at org.apache.hadoop.hbase.zookeeper.ZKClusterId.readClusterIdZNode(ZKClusterId.java:65)
at org.apache.hadoop.hbase.client.ZooKeeperRegistry.getClusterId(ZooKeeperRegistry.java:83)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.retrieveClusterId(HConnectionManager.java:907)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.<init>(HConnectionManager.java:701)
... 35 more
Caused by: java.lang.ClassNotFoundException: org.cloudera.htrace.Trace
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 41 more
================================================================================
As you can see, I've tried to add every jar file that might be relevant, and I've also removed everything but the load and store commands to see if the code itself is causing the problem, but I get the same results. I am very new to Pig, so apologies if this is a silly mistake; I have searched for answers elsewhere, but nothing has worked so far. Also, I am on a Mac with Hadoop, HBase, and Hive installed locally, and I am running the command 'pig -x local test.pig' in the terminal. Any advice would be great, thanks!
I'm not sure where you grabbed your dependencies from, but your code is looking for a Cloudera package that doesn't exist, hence the error:
Caused by: java.lang.ClassNotFoundException: org.cloudera.htrace.Trace
I just downloaded HBase with brew install hbase, and the correct class is org.apache.htrace.Trace, located in /usr/local/Cellar/hbase/.../htrace-core-*.jar.
So, I would recommend downloading the latest version of the HBase libraries.
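To confirm which package the Trace class actually ships in on your machine, a quick check like this should work (assuming the Homebrew layout from the question; the jar version will vary):
# list the htrace jar bundled with the Homebrew HBase install
ls /usr/local/Cellar/hbase/1.1.2/libexec/lib/htrace-core*.jar
# show which package Trace.class lives in
unzip -l /usr/local/Cellar/hbase/1.1.2/libexec/lib/htrace-core*.jar | grep Trace.class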

Hadoop 2.7.1 - Map reduce error: Diagnostics: Exception from container-launch

I just upgraded Hadoop from 2.6.0 to 2.7.1, and all my MapReduce jobs that run against hbase-1.1.1 started to fail.
The error I get in the ResourceManager is:
Diagnostics: Exception from container-launch.
Container id: container_e08_1439909765014_0004_02_000001
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:545)
at org.apache.hadoop.util.Shell.run(Shell.java:456)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:722)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:724)
Container exited with a non-zero exit code 1
Failing this attempt. Failing the application.
Looking deeper into the logs, I find a FileNotFoundException:
java.io.FileNotFoundException: ${yarn.nodemanager.log-dirs}/application_1439909765014_0004/container_e08_1439909765014_0004_02_000001 (Is a directory)
at java.io.FileOutputStream.open(Native Method)
at java.io.FileOutputStream.<init>(FileOutputStream.java:221)
at java.io.FileOutputStream.<init>(FileOutputStream.java:142)
at org.apache.log4j.FileAppender.setFile(FileAppender.java:294)
at org.apache.log4j.FileAppender.activateOptions(FileAppender.java:165)
at org.apache.hadoop.yarn.ContainerLogAppender.activateOptions(ContainerLogAppender.java:55)
at org.apache.log4j.config.PropertySetter.activate(PropertySetter.java:307)
at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:172)
at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:104)
at org.apache.log4j.PropertyConfigurator.parseAppender(PropertyConfigurator.java:842)
at org.apache.log4j.PropertyConfigurator.parseCategory(PropertyConfigurator.java:768)
at org.apache.log4j.PropertyConfigurator.configureRootCategory(PropertyConfigurator.java:648)
at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:514)
at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:580)
at org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:526)
at org.apache.log4j.LogManager.<clinit>(LogManager.java:127)
at org.apache.log4j.Logger.getLogger(Logger.java:104)
at org.apache.commons.logging.impl.Log4JLogger.getLogger(Log4JLogger.java:262)
at org.apache.commons.logging.impl.Log4JLogger.<init>(Log4JLogger.java:108)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.commons.logging.impl.LogFactoryImpl.createLogFromClass(LogFactoryImpl.java:1025)
at org.apache.commons.logging.impl.LogFactoryImpl.discoverLogImplementation(LogFactoryImpl.java:844)
at org.apache.commons.logging.impl.LogFactoryImpl.newInstance(LogFactoryImpl.java:541)
at org.apache.commons.logging.impl.LogFactoryImpl.getInstance(LogFactoryImpl.java:292)
at org.apache.commons.logging.impl.LogFactoryImpl.getInstance(LogFactoryImpl.java:269)
at org.apache.commons.logging.LogFactory.getLog(LogFactory.java:657)
at org.apache.hadoop.service.AbstractService.<clinit>(AbstractService.java:43)
Has anybody encountered this problem when migrating to the newest Hadoop version?
I've seen something similar in https://issues.apache.org/jira/browse/YARN-1473, but perhaps there is some known difference or issue between versions that I'm not seeing.
Thanks.
It turned out to be a misconfiguration.
The clients had not been updated and were still trying to run jobs with the previous version of Hadoop (2.6.0). Once I upgraded the clients to work with 2.7.1 and submitted jobs that way, everything worked well.
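A quick way to catch this kind of mismatch is to compare versions on the client machine and on a cluster node before submitting jobs (assuming the hadoop binary is on the PATH in both places):
# run on both the client and a cluster node; the reported versions should match
hadoop version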

Unable to open iterator for alias -Pig

I am trying to load an XML file using XMLLoader (Piggybank) in Pig, but I get an error saying "Unable to open iterator for alias B".
I have written the following code:
REGISTER /home/hdfs/spig/trunk/contrib/piggybank/java/piggybank.jar
A = LOAD '/core-site.xml' USING org.apache.pig.piggybank.storage.XMLLoader('property') AS (x:chararray);
B = foreach A GENERATE FLATTEN(REGEX_EXTRACT_ALL(x,'<property>\\s*<name>(.*) </name>\\s*<value>(.*)</value>\\s*<description>(.*)</description>\\s*</property>'));
dump B;
The following is the log file:
Pig Stack Trace
ERROR 1066: Unable to open iterator for alias A

org.apache.pig.impl.logicalLayer.FrontendException: ERROR 1066: Unable to open iterator for alias A
    at org.apache.pig.PigServer.openIterator(PigServer.java:935)
    at org.apache.pig.tools.grunt.GruntParser.processDump(GruntParser.java:754)
    at org.apache.pig.tools.pigscript.parser.PigScriptParser.parse(PigScriptParser.java:376)
    at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:230)
    at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:205)
    at org.apache.pig.tools.grunt.Grunt.exec(Grunt.java:81)
    at org.apache.pig.Main.run(Main.java:631)
    at org.apache.pig.Main.main(Main.java:177)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: java.io.IOException: Job terminated with anomalous status FAILED
    at org.apache.pig.PigServer.openIterator(PigServer.java:927)
    ... 13 more
Looks like one of your jobs might have failed:
Caused by: java.io.IOException: Job terminated with anomalous status FAILED
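The FrontendException only says that the backend job died; the task logs usually contain the real cause. With YARN log aggregation enabled you can pull them like this (the application id is a placeholder; copy the real one from Pig's console output or the ResourceManager UI):
yarn logs -applicationId <application_id>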
