Is there a way to process data in a SQL table column before ingesting it into HBase using Sqoop - hadoop

Data needs to be ingested from a SQL table to HBase using Sqoop. I have XML data in one column. Instead of ingesting the complete XML for each row, I want to extract the required details from the XML and then ingest them with the rest of the columns. Is there a way, like writing a UDF, where the XML column is passed in and the output is used along with the other SQL columns for ingestion?

No, but you can extend the Java class PutTransformer (https://sqoop.apache.org/docs/1.4.4/SqoopDevGuide.html), add your XML transformation logic there, and pass the custom JAR file to the sqoop command.
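A minimal sketch of what such a transformer could look like, assuming the Sqoop 1.4.x org.apache.sqoop.hbase.PutTransformer API from the dev guide linked above (getPutCommand receives the row as a column-name-to-value map); the column name "details", the XPath expression, and the target qualifier are made-up placeholders:

import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.sqoop.hbase.PutTransformer;
import org.w3c.dom.Document;

// Custom transformer: parses the XML in the "details" column and writes only
// the extracted value, while passing the remaining columns through unchanged.
public class XmlExtractPutTransformer extends PutTransformer {

    @Override
    public List<Put> getPutCommand(Map<String, Object> fields) throws IOException {
        Object rowKey = fields.get(getRowKeyColumn());
        if (rowKey == null) {
            return Collections.emptyList();
        }
        Put put = new Put(Bytes.toBytes(rowKey.toString()));
        byte[] family = Bytes.toBytes(getColumnFamily());

        for (Map.Entry<String, Object> entry : fields.entrySet()) {
            String column = entry.getKey();
            Object value = entry.getValue();
            if (value == null || column.equals(getRowKeyColumn())) {
                continue;
            }
            if ("details".equalsIgnoreCase(column)) {
                // Keep only the piece of the XML we care about (hypothetical XPath).
                String extracted = extractFromXml(value.toString(), "/details/orderId/text()");
                // addColumn is the HBase 1.x+ API; older HBase versions use put.add(...).
                put.addColumn(family, Bytes.toBytes("order_id"), Bytes.toBytes(extracted));
            } else {
                put.addColumn(family, Bytes.toBytes(column), Bytes.toBytes(value.toString()));
            }
        }
        return Collections.singletonList(put);
    }

    private String extractFromXml(String xml, String xpathExpr) throws IOException {
        try {
            Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                    .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
            XPath xpath = XPathFactory.newInstance().newXPath();
            return xpath.evaluate(xpathExpr, doc);
        } catch (Exception e) {
            throw new IOException("Failed to parse XML column", e);
        }
    }
}

Package this into a JAR, put it on Sqoop's classpath (for example via -libjars or $SQOOP_HOME/lib), and tell Sqoop to use it; in the 1.4.x code base the transformer class is picked up from a configuration property (I believe sqoop.hbase.insert.put.transformer.class, but verify the name against the version you run).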

Related

Read data from multiple tables at a time and combine the data based on a where clause using NiFi

I have a scenario where I need to extract data from multiple database tables, including the schema, combine it, and then write the combined data to an Excel file. How can I do that?
In NiFi the general strategy is to read from something like a fact table with ExecuteSQL or another SQL processor, then use LookupRecord to enrich the data with a lookup table. The catch in NiFi is that you can only do one table at a time, so you'd need one LookupRecord for each enrichment table. You could then write to a CSV file that you can open in Excel. There might be extensions elsewhere that can write directly to Excel, but I'm not aware of any in the standard NiFi distribution.

How can I process Avro Container data with different versions of schema?

I have months' worth of data from a single domain stored in HDFS in Avro Container files. Each file has the schema for all the data in that file, of course. How do I process all the data using Hive or Pig? It seems both Hive and Pig need the avsc file or some form of table structure definition up front, i.e. even if I use Avro tools to extract the avsc from each file, I will have to load each dataset using a different avsc file and I cannot process all of them using one job or one DDL + query.
Isn't it possible for Hive and Pig to pull the avsc at runtime based on the Avro Container file being processed? Is that already implemented and I'm just not finding it, or is it too difficult to implement?

How to query a file in HDFS which has XML as one column

Context:
I have data in a table in MySQL with XML as one column.
For example: table application has 3 fields.
id(integer), details(xml), address(text)
(In the real case I have 10-12 fields here.)
Now we want to query the whole table, with all the fields of the MySQL table, using Pig.
I transferred the data from MySQL into HDFS using Sqoop with
record delimiter '\u0005' and column delimiter "`" to /x.xml.
Then I loaded the data from x.xml into Pig using
app = LOAD '/x.xml' USING PigStorage('\u0005') AS (id:int , details:chararray , address:chararray);
What is the best way to query such data?
Solutions that I could currently think of:
Use a custom loader and extend LoadFunc to read the data.
If there is some way to load a particular column using xmlpathloader and load the rest normally, please suggest whether that can be done,
as all the examples I have seen using XPath use an XML loader while loading the file.
For example:
A = LOAD 'xmls/hadoop_books.xml' using org.apache.pig.piggybank.storage.XMLLoader('BOOK') as (x:chararray);
Is Pig a good fit for querying this kind of data? Please suggest if there are other alternative technologies that handle it more effectively.
The size of the data is around 500 GB.
FYI, I am new to the Hadoop ecosystem and I might be missing something trivial.
Load a specific column:
Some other StackOverflow answers suggest preprocessing the data with awk (generating a new input that contains only the XML part).
A nicer workaround is to generate the specific data with an extra FOREACH on the XML column, like:
B = FOREACH app GENERATE details;
and store it so that it can be loaded with an XML loader.
Check the StreamingXMLLoader.
(You can also check Apache Drill; it may support this case out of the box.)
Or use a UDF for the XML processing: in Pig you then just hand over the related XML field, as in the sketch below.
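A rough sketch of such a UDF using Pig's Java EvalFunc API; the class name, the XPath expression, and the returned field are made up for illustration:

import java.io.ByteArrayInputStream;
import java.io.IOException;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathFactory;
import org.apache.pig.EvalFunc;
import org.apache.pig.PigWarning;
import org.apache.pig.data.Tuple;
import org.w3c.dom.Document;

// Takes the XML column (a chararray) and returns a single value extracted with
// XPath; returning a tuple or bag of several values works the same way, just
// with a different return type.
public class ExtractDetail extends EvalFunc<String> {

    // Hypothetical XPath; replace with whatever you need from the details column.
    private static final String XPATH = "/details/customer/name/text()";

    @Override
    public String exec(Tuple input) throws IOException {
        if (input == null || input.size() == 0 || input.get(0) == null) {
            return null;
        }
        String xml = (String) input.get(0);
        try {
            Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                    .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
            XPath xpath = XPathFactory.newInstance().newXPath();
            return xpath.evaluate(XPATH, doc);
        } catch (Exception e) {
            // One bad XML row should not kill the whole job.
            warn("Could not parse XML: " + e.getMessage(), PigWarning.UDF_WARNING_1);
            return null;
        }
    }
}

Register the JAR in the script and call the UDF inside a FOREACH, e.g. REGISTER myudfs.jar; B = FOREACH app GENERATE id, ExtractDetail(details) AS name, address; (with names adjusted to your own schema).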

Creating HCatalog Schema from File

I have a CSV file that contains about a thousand different columns. I want to make this into a Hive table and an HCatalog schema without typing each field individually. Is this possible? If so, could someone point me in the right direction? Thanks.
I would prefer going with a Java program for this problem. Make sure the CSV file header contains the required column names for the Hive or HCatalog table. A CSV library such as OpenCSV provides a CSVReader that can read the header of the CSV file; add those headers to an array list. Then make use of either JDBC or WebHCat to create the tables in Hive and HCatalog. For JDBC, iterate through the list while adding the columns to the table; the same approach can be followed for WebHCat. A sketch of the JDBC variant is below.
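A rough sketch of the JDBC route, assuming OpenCSV for reading the header and the Hive JDBC driver (org.apache.hive.jdbc.HiveDriver); the connection URL, file path, table name, and the choice to type every column as STRING are placeholders:

import java.io.FileReader;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;
import com.opencsv.CSVReader;

// Reads the CSV header, builds a CREATE TABLE statement from it, and runs the
// DDL through HiveServer2; the table is then visible to HCatalog as well,
// since HCatalog shares the Hive metastore.
public class CsvHeaderToHiveTable {

    public static void main(String[] args) throws Exception {
        String csvPath = "/data/wide_file.csv";                   // hypothetical path
        String jdbcUrl = "jdbc:hive2://localhost:10000/default";  // hypothetical HiveServer2

        // Read only the first line (the header) of the CSV.
        String[] header;
        try (CSVReader reader = new CSVReader(new FileReader(csvPath))) {
            header = reader.readNext();
        }

        // Build the DDL; every column is typed STRING here for simplicity.
        StringBuilder ddl = new StringBuilder("CREATE TABLE IF NOT EXISTS wide_table (");
        for (int i = 0; i < header.length; i++) {
            if (i > 0) {
                ddl.append(", ");
            }
            ddl.append('`').append(header[i].trim()).append("` STRING");
        }
        ddl.append(") ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' STORED AS TEXTFILE");

        // Execute the DDL over JDBC (hive-jdbc must be on the classpath).
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        try (Connection conn = DriverManager.getConnection(jdbcUrl, "", "");
             Statement stmt = conn.createStatement()) {
            stmt.execute(ddl.toString());
        }
    }
}

The WebHCat route is the same idea, except the generated DDL string is posted to WebHCat's DDL endpoint instead of being executed over JDBC.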

Is there a way to access Avro data stored in HBase using Hive to do analysis

My HBase table has rows that contain both serialized Avro (put there using havrobase) and string data. I know that a Hive table can be mapped to Avro data stored in HDFS for analysis, but I was wondering if anyone has tried to map Hive to HBase table(s) that contain Avro data. Basically I need to be able to query both Avro and non-Avro data stored in HBase, do some analysis, and store the result in a different HBase table. I need the capability to do this as a batch job as well. I don't want to write a Java MapReduce job to do this because we have constantly changing configurations and we need a scripted approach. Any suggestions? Thanks in advance!
You can write an HBase coprocessor to expose the Avro record as regular HBase qualifiers; you can see an implementation of that in Intel's panthera-dot. A simplified sketch of the idea is below.
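As a rough illustration only (not how panthera-dot itself is implemented), a RegionObserver written against the HBase 1.x coprocessor API could deserialize the Avro cell on read and append its fields as extra qualifiers; the column family, qualifier, and inline schema below are all assumptions:

import java.io.IOException;
import java.util.List;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericDatumReader;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.io.DecoderFactory;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellUtil;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.util.Bytes;

// On every Get, finds the cell holding the serialized Avro record (family "d",
// qualifier "avro" -- made-up names) and appends one synthetic cell per Avro
// field, so clients such as Hive's HBase storage handler see plain qualifiers.
public class AvroExpandObserver extends BaseRegionObserver {

    private static final byte[] FAMILY = Bytes.toBytes("d");
    private static final byte[] AVRO_QUALIFIER = Bytes.toBytes("avro");

    // In reality the schema would come from configuration or a schema registry.
    private static final Schema SCHEMA = new Schema.Parser().parse(
            "{\"type\":\"record\",\"name\":\"Payload\",\"fields\":["
          + "{\"name\":\"name\",\"type\":\"string\"},"
          + "{\"name\":\"amount\",\"type\":\"long\"}]}");

    @Override
    public void postGetOp(ObserverContext<RegionCoprocessorEnvironment> ctx,
                          Get get, List<Cell> results) throws IOException {
        Cell avroCell = null;
        for (Cell cell : results) {
            if (CellUtil.matchingColumn(cell, FAMILY, AVRO_QUALIFIER)) {
                avroCell = cell;
                break;
            }
        }
        if (avroCell == null) {
            return;
        }

        // Deserialize the Avro record held in the cell value.
        GenericDatumReader<GenericRecord> reader = new GenericDatumReader<>(SCHEMA);
        GenericRecord record = reader.read(null,
                DecoderFactory.get().binaryDecoder(CellUtil.cloneValue(avroCell), null));

        // Append each Avro field as a synthetic qualifier on the same row
        // (values are stored as their string representation for simplicity).
        byte[] row = CellUtil.cloneRow(avroCell);
        for (Schema.Field field : SCHEMA.getFields()) {
            Object value = record.get(field.name());
            if (value != null) {
                results.add(new KeyValue(row, FAMILY,
                        Bytes.toBytes(field.name()),
                        Bytes.toBytes(value.toString())));
            }
        }
        // Note: a production observer would keep the results list sorted and
        // also hook the scan path (postScannerNext) so batch jobs and Hive see
        // the expanded qualifiers too.
    }
}

Load the coprocessor on the table (for example via the table descriptor), and the expanded qualifiers can then be mapped in Hive's HBase storage handler like any other columns.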

Resources