Loading data from HDFS to HBASE - hadoop

I'm using Apache Hadoop 1.1.1 and Apache HBase 0.94.3. I want to load data from HDFS into HBase.
I wrote a Pig script for this purpose. First I created the HBase table, then I wrote a Pig script to load the data from HDFS into HBase. But it is not loading the data into the HBase table, and I don't know where it's going wrong.
Below is the command used to create the HBase table:
create 'mydata','mycf'
Below is the Pig script to load data from HDFS into HBase:
A = LOAD '/user/hduser/Dataparse/goodrec1.txt' USING PigStorage(',') as (c1:int, c2:chararray,c3:chararray,c4:int,c5:chararray);
STORE A INTO 'hbase://mydata'
USING org.apache.pig.backend.hadoop.hbase.HBaseStorage(
'mycf:c1,mycf:c2,mycf:c3,mycf:c4,mycf:c5');
After executing the script it says:
2014-04-29 16:01:06,367 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 100% complete
2014-04-29 16:01:06,376 [main] ERROR org.apache.pig.tools.pigstats.PigStatsUtil - 1 map reduce job(s) failed!
2014-04-29 16:01:06,382 [main] INFO org.apache.pig.tools.pigstats.SimplePigStats - Script Statistics:
HadoopVersion PigVersion UserId StartedAt FinishedAt Features
1.1.1 0.12.0 hduser 2014-04-29 15:58:07 2014-04-29 16:01:06 UNKNOWN
Failed!
Failed Jobs:
JobId Alias Feature Message Outputs
job_201403142119_0084 A MAP_ONLY Message: Job failed! Error - JobCleanup Task Failure, Task: task_201403142119_0084_m_000001 hbase://mydata,
Input(s):
Failed to read data from "/user/hduser/Dataparse/goodrec1.txt"
Output(s):
Failed to produce result in "hbase://mydata"
Counters:
Total records written : 0
Total bytes written : 0
Spillable Memory Manager spill count : 0
Total bags proactively spilled: 0
Total records proactively spilled: 0
Job DAG:
job_201403142119_0084
2014-04-29 16:01:06,382 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Failed!
Please help; where am I going wrong?

You have specified too many columns in the output to HBase. You have 5 input columns and 5 output columns, but HBaseStorage uses the first column of the relation as the row key, so only the remaining 4 should be listed in the output.
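For example, a corrected version of the STORE (a minimal sketch, assuming c1 is meant to be the row key) would only map the remaining four fields:
A = LOAD '/user/hduser/Dataparse/goodrec1.txt' USING PigStorage(',') as (c1:int, c2:chararray, c3:chararray, c4:int, c5:chararray);
-- c1 becomes the HBase row key, so it is not listed in the column spec
STORE A INTO 'hbase://mydata'
USING org.apache.pig.backend.hadoop.hbase.HBaseStorage(
'mycf:c2,mycf:c3,mycf:c4,mycf:c5');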

Related

Not able to insert data into hbase table using PIG

If I run:
data = LOAD 'hdfs:/user/zzz/Pokemon.csv' USING PigStorage(',') AS (serial_no:int,name:chararray,type1:chararray,type2:chararray,total:int,hp:int,attack:int,defence:int,sp_attk:int,sp_def:int,speed:int);
the data loads successfully, as I can see by dumping it.
But after that, when I run:
STORE data INTO 'hbase://pokemons' USING org.apache.pig.backend.hadoop.hbase.HBaseStorage('cf:name,cf:type1,cf:type2,cf:total,cf:hp,cf:attack,cf:defence,cf:sp_attk,cf:sp_def,cf:speed');
then the problem arises, as you can see below:
HadoopVersion PigVersion UserId StartedAt FinishedAt Features
3.2.1 0.17.0 zzz 2019-12-11 12:57:34 2019-12-11 12:57:43 UNKNOWN
Failed!
Failed Jobs:
JobId Alias Feature Message Outputs
job_1576044193401_0008 data MAP_ONLY Message: Job failed! hbase://pokemons,
Input(s):
Failed to read data from "hdfs:/user/zzz/Pokemon.csv"
Output(s):
Failed to produce result in "hbase://pokemons"
Counters:
Total records written : 0
Total bytes written : 0
Spillable Memory Manager spill count : 0
Total bags proactively spilled: 0
Total records proactively spilled: 0
Job DAG:
job_1576044193401_0008
2019-12-11 12:57:43,115 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Failed!
I'm not exactly sure what happened, but I do know that current Pig is not tested against Hadoop version 3 or higher. This is tracked in https://issues.apache.org/jira/browse/PIG-5253
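For what it's worth, the STORE itself already follows the row-key convention described in the answer above: the first field of the relation, serial_no, becomes the HBase row key and is therefore not listed among the ten columns. An annotated sketch (assuming the pokemons table has a single column family cf):
-- serial_no (the first field) is used as the row key and is not listed below
STORE data INTO 'hbase://pokemons'
USING org.apache.pig.backend.hadoop.hbase.HBaseStorage(
'cf:name,cf:type1,cf:type2,cf:total,cf:hp,cf:attack,cf:defence,cf:sp_attk,cf:sp_def,cf:speed');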

Pig on mapreduce mode is stuck on dumping hdfs data in Hortonworks HDP

I have some data files in my Hortonworks HDFS location. My requirement is to dump the HDFS data in the Pig shell using Pig's MapReduce mode. After loading the file data from HDFS, when I try to dump it in the Pig shell using the DUMP command, the MapReduce job gets stuck at 0% and does not complete even after a long time.
I followed the steps below:
1) Start Pig in MapReduce mode:
pig -x mapreduce
2) Load data into Pig from an HDFS directory:
mapdata = load 'hdfs://ip-xxx-xx-xx-xx.us-east-2.compute.internal:8020/user/abc/datadir1' as (a:map[chararray]);
3) Print the data:
dump mapdata;
After executing the third step, I get the messages below on the shell:
2018-10-09 07:25:51,099 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 0% complete
2018-10-09 07:25:51,099 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Running jobs are [job_1539066382468_0147]

Sqoop import to HCatalog/Hive - table not visible

HDP-2.4.2.0-258 installed using Ambari 2.2.2.0
I have to import several SQL Server schemas which should be accessible via Hive, Pig, MR, and any third party (in the future). I decided to import into HCatalog.
Sqoop provides ways to import to Hive OR HCatalog; I guess that if I import to HCatalog, the same table will be accessible from the Hive CLI, from MR, and from Pig (please evaluate my assumption).
Questions :
If imported to Hive directly, will the table be available to Pig and MR?
If imported to HCatalog, what needs to be done to access it via Hive?
Is pre-creation of tables in Hive necessary? If yes, what is the advantage of importing into HCatalog, compared to importing into Hive directly, or importing into HDFS and then creating an external table?
Issue :
I wish to achieve the following in one step:
Import the data (from the SQL Server tables)
Avoid 'pre-creating' or writing CREATE statements for those tables (there are hundreds of them)
Store the tables in ORC format
Store this data at a custom HDFS path, say /org/data/schema1, /org/data/schema2, and so on (is this impossible because of what Sqoop says about --target-dir/--warehouse-dir?)
I executed the following command:
-bash-4.2$ sqoop import --connect 'jdbc:sqlserver://<IP>;database=FleetManagement' --username --password --table SettingAttribute -- --schema Administration --hcatalog-home /usr/hdp/current/hive-webhcat --hcatalog-database default --hcatalog-table SettingAttribute --create-hcatalog-table --hcatalog-storage-stanza "stored as orcfile"
The source table contains 109 records and those are fetched:
16/08/10 15:02:27 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6.2.4.2.0-258
16/08/10 15:02:27 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
16/08/10 15:02:28 INFO manager.SqlManager: Using default fetchSize of 1000
16/08/10 15:02:28 INFO manager.SQLServerManager: We will use schema Administration
16/08/10 15:02:28 INFO tool.CodeGenTool: Beginning code generation
16/08/10 15:02:28 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM [Administration].[SettingAttribute] AS t WHERE 1=0
16/08/10 15:02:28 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /usr/hdp/2.4.2.0-258/hadoop-mapreduce
Note: /tmp/sqoop-ojoqcu/compile/dfab14748c41a566ec286b7e4b11004d/SettingAttribute.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
16/08/10 15:02:30 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-ojoqcu/compile/dfab14748c41a566ec286b7e4b11004d/SettingAttribute.jar
16/08/10 15:02:30 INFO mapreduce.ImportJobBase: Beginning import of SettingAttribute
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hdp/2.4.2.0-258/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/2.4.2.0-258/zookeeper/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/2.4.2.0-258/accumulo/lib/slf4j-log4j12.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
16/08/10 15:02:31 INFO impl.TimelineClientImpl: Timeline service address: http://l4373t.sss.com:8188/ws/v1/timeline/
16/08/10 15:02:31 INFO client.RMProxy: Connecting to ResourceManager at l4283t.sss.com/138.106.9.80:8050
16/08/10 15:02:33 INFO db.DBInputFormat: Using read commited transaction isolation
16/08/10 15:02:33 INFO db.DataDrivenDBInputFormat: BoundingValsQuery: SELECT MIN([SettingAttributeId]), MAX([SettingAttributeId]) FROM [Administration].[SettingAttribute]
16/08/10 15:02:33 INFO mapreduce.JobSubmitter: number of splits:4
16/08/10 15:02:33 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1467787344827_0013
16/08/10 15:02:34 INFO impl.YarnClientImpl: Submitted application application_1467787344827_0013
16/08/10 15:02:34 INFO mapreduce.Job: The url to track the job: http://l4283t.sss.com:8088/proxy/application_1467787344827_0013/
16/08/10 15:02:34 INFO mapreduce.Job: Running job: job_1467787344827_0013
16/08/10 15:02:41 INFO mapreduce.Job: Job job_1467787344827_0013 running in uber mode : false
16/08/10 15:02:41 INFO mapreduce.Job: map 0% reduce 0%
16/08/10 15:02:47 INFO mapreduce.Job: map 100% reduce 0%
16/08/10 15:02:48 INFO mapreduce.Job: Job job_1467787344827_0013 completed successfully
16/08/10 15:02:48 INFO mapreduce.Job: Counters: 30
File System Counters
FILE: Number of bytes read=0
FILE: Number of bytes written=616636
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=540
HDFS: Number of bytes written=10079
HDFS: Number of read operations=16
HDFS: Number of large read operations=0
HDFS: Number of write operations=8
Job Counters
Launched map tasks=4
Other local map tasks=4
Total time spent by all maps in occupied slots (ms)=16132
Total time spent by all reduces in occupied slots (ms)=0
Total time spent by all map tasks (ms)=16132
Total vcore-seconds taken by all map tasks=16132
Total megabyte-seconds taken by all map tasks=66076672
Map-Reduce Framework
Map input records=109
Map output records=109
Input split bytes=540
Spilled Records=0
Failed Shuffles=0
Merged Map outputs=0
GC time elapsed (ms)=320
CPU time spent (ms)=6340
Physical memory (bytes) snapshot=999870464
Virtual memory (bytes) snapshot=21872697344
Total committed heap usage (bytes)=943194112
File Input Format Counters
Bytes Read=0
File Output Format Counters
Bytes Written=10079
16/08/10 15:02:48 INFO mapreduce.ImportJobBase: Transferred 9.8428 KB in 17.2115 seconds (585.597 bytes/sec)
16/08/10 15:02:48 INFO mapreduce.ImportJobBase: Retrieved 109 records.
The files are created under my user:
hdfs dfs -ls /user/ojoqcu/SettingAttribute
Found 5 items
-rw------- 3 ojoqcu hdfs 0 2016-08-10 15:02 /user/ojoqcu/SettingAttribute/_SUCCESS
-rw------- 3 ojoqcu hdfs 8378 2016-08-10 15:02 /user/ojoqcu/SettingAttribute/part-m-00000
-rw------- 3 ojoqcu hdfs 144 2016-08-10 15:02 /user/ojoqcu/SettingAttribute/part-m-00001
-rw------- 3 ojoqcu hdfs 1123 2016-08-10 15:02 /user/ojoqcu/SettingAttribute/part-m-00002
-rw------- 3 ojoqcu hdfs 434 2016-08-10 15:02 /user/ojoqcu/SettingAttribute/part-m-00003
I cannot see anything in HCatalog (nor in Hive):
-bash-4.2$ /usr/hdp/2.4.2.0-258/hive-hcatalog/bin/hcat -e "show tables in default;"
WARNING: Use "yarn jar" to launch YARN applications.
16/08/10 15:07:12 WARN conf.HiveConf: HiveConf of name hive.server2.enable.impersonation does not exist
OK
Time taken: 2.007 seconds
Is there some authorization issue?
I checked /var/log but no logs exist for Sqoop, Hive-HCatalog, or Hive. How can I view the authorization issue and fix it?
Well, I'm unsure whether it was an authorization issue, a mere parsing problem, or both. I did the following and it worked:
Did an su hive
Executed the following command (probably the -- --schema option should be the last argument; Sqoop simply ignores/breaks everything after it!):
sqoop import --hcatalog-home /usr/hdp/current/hive-webhcat --hcatalog-database FleetManagement_Ape --hcatalog-table DatabaseLog --create-hcatalog-table --hcatalog-storage-stanza "stored as orcfile" --connect 'jdbc:sqlserver://<IP>;database=FleetManagement' --username --password --table DatabaseLog -- --schema ape
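Afterwards, the imported table can be checked from HCatalog with the same kind of show tables query used earlier, this time against the target database (a hypothetical verification step, assuming the same HDP install path as above):
/usr/hdp/2.4.2.0-258/hive-hcatalog/bin/hcat -e "show tables in FleetManagement_Ape;"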

Transfer data to HBASE using Pig

I have a pseudo-distributed mode setup. Versions are as follows:
kandabap@prakashl:~$ hadoop version
Hadoop 1.2.0
kandabap@prakashl:~$ pig version
Apache Pig version 0.12.0 (r1529718) compiled Oct 07 2013, 12:20:14
kandabap@prakashl:~$ hbase version
HBase 0.90.1-cdh3u0
I am trying to transfer data into HBase using a Pig script as follows:
PIG_CLASSPATH=/usr/lib/hbase/hbase-0.94.8/hbase-0.94.8.jar:/usr/lib/hbase/hbase-0.94.8/lib/zookeeper-3.4.5.jar /usr/lib/pig/pig-0.12.0/bin/pig /home/kandabap/Documents/H/HBASE/scripts.pig
But there seem to be some errors:
Failed Jobs:
JobId Alias Feature Message Outputs
job_201405201031_0005 raw_data MAP_ONLY Message: Job failed! Error - JobCleanup Task Failure, Task: task_201405201031_0005_m_000001 hbase://sample_names,
Input(s):
Failed to read data from "/input.csv"
Output(s):
Failed to produce result in "hbase://sample_names"
Counters:
Total records written : 0
Total bytes written : 0
Spillable Memory Manager spill count : 0
Total bags proactively spilled: 0
Total records proactively spilled: 0
Job DAG:
job_201405201031_0005
2014-05-20 11:14:59,604 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Failed!
2014-05-20 11:14:59,608 [main] ERROR org.apache.pig.tools.grunt.GruntParser - ERROR 2244: Job failed, hadoop does not return any error message
Details at logfile: /usr/lib/hbase/hbase-0.94.8/pig_1400548430579.log
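The contents of scripts.pig are not shown; judging from the alias, input path, and table name in the failure output, it presumably looks roughly like the sketch below (the field names, types, and the column family info are assumptions):
raw_data = LOAD '/input.csv' USING PigStorage(',') AS (id:chararray, name:chararray);
-- the first field (id) becomes the HBase row key
STORE raw_data INTO 'hbase://sample_names'
USING org.apache.pig.backend.hadoop.hbase.HBaseStorage('info:name');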

hadoop pig cannot mkdir java throw IO exception

I have a very simple script example from the Hadoop Real-World Solutions Cookbook,
and I am trying it on an Amazon Cloudera clustertogov04 AMI,
and it gives me a Java exception about not being able to mkdir,
but I have enough disk space:
[ec2-user]$ df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/xvde1 8255928 3307252 4529300 43% /
tmpfs 3757068 0 3757068 0% /dev/shm
/dev/xvdk 103212320 192116 97777324 1% /data
Here are the script, command, and error output:
weblogs = load '/data2/weblogs/weblog_entries.txt' as
(md5:chararray,
url:chararray,
date:chararray,
time:chararray,
ip:chararray);
md5_grp = group weblogs by md5 parallel 4;
store md5_grp into '/data/weblogs/weblogs_md5_groups.bcp';
pig -x local -f pig02 2>err02
2013-06-20 19:57:29,499 [Thread-4] INFO org.apache.hadoop.mapred.Merger - Down to the last merge-pass, with 1 segments left of total size: 299132 bytes
2013-06-20 19:57:29,499 [Thread-4] INFO org.apache.hadoop.mapred.LocalJobRunner -
2013-06-20 19:57:29,519 [Thread-4] WARN org.apache.hadoop.mapred.LocalJobRunner - job_local_0001
java.io.IOException: Mkdirs failed to create file:/data/weblogs/weblogs_md5_groups.bcp/_temporary/_attempt_local_0001_r_000000_0
at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:434)
at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:420)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:805)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:786)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:685)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigTextOutputFormat.getRecordWriter(PigTextOutputFormat.java:98)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat.getRecordWriter(PigOutputFormat.java:84)
at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:582)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:433)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:309)
2013-06-20 19:57:33,176 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - job job_local_0001 has failed! Stop running all dependent jobs
2013-06-20 19:57:33,180 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 100% complete
2013-06-20 19:57:33,182 [main] ERROR org.apache.pig.tools.pigstats.PigStatsUtil - 1 map reduce job(s) failed!
2013-06-20 19:57:33,182 [main] INFO org.apache.pig.tools.pigstats.SimplePigStats - Detected Local mode. Stats reported below may be incomplete
2013-06-20 19:57:33,185 [main] INFO org.apache.pig.tools.pigstats.SimplePigStats - Script Statistics:
HadoopVersion PigVersion UserId StartedAt FinishedAt Features
2.0.0-cdh4.1.2 0.10.0-cdh4.1.2 ec2-user 2013-06-20 19:57:27 2013-06-20 19:57:33 GROUP_BY
Failed!
Pig Stack Trace ---------------
ERROR 2244: Job failed, hadoop does not return any error message
org.apache.pig.backend.executionengine.ExecException: ERROR 2244: Job failed, hadoop does not return any error message at org.apache.pig.tools.grunt.GruntParser.executeBatch(GruntParser.java:140)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:193)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:165)
at org.apache.pig.tools.grunt.Grunt.exec(Grunt.java:84)
at org.apache.pig.Main.run(Main.java:430)
at org.apache.pig.Main.main(Main.java:111)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.util.RunJar.main(RunJar.java:208)
Looks like your Hadoop job can't create the directory you've specified in your STORE statement.
Have you tried storing the output to a different location, such as your home directory?
Also, FYI: Pig won't save its output to a file called "weblogs_md5_groups.bcp"; it will actually create a directory with that name.
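For example, a minimal change (a sketch, assuming /home/ec2-user is writable and the rest of the script stays the same) is to point the STORE at a path under the home directory; Pig will create a directory with that name:
-- store under the user's home directory instead of /data/weblogs
store md5_grp into '/home/ec2-user/weblogs_md5_groups.bcp';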
