We have fixed-length format files in S3. We want to create Athena tables after converting them to Parquet. We have around 50-60 different such files.
Currently I can think of two approaches:
Put the fixed-length parsing logic in the Athena table creation script.
Create a Glue job which parses the files and writes Parquet, then create the Athena table on top of that output.
Approach-1:
Though it may need minimal code, that code would live in the create-table script. We are using Terraform to create the infrastructure, so the parsing logic (a regex or Grok pattern) would become part of the infra code, and I am skeptical about putting logic in infra code.
Approach-2:
This would be a Glue job written in Spark. It would be flexible enough to parse fixed-length files, and we could write reusable fixed-length parsing code and apply it to all the different files. The parsing logic would stay with the developers. Athena would have an external table on the Glue job's output location, and the infra code would contain only the create statement.
Could you please provide your views?
My recommendation would be to go with Approach #2. Using Spark's text reader (spark.read.text), you can read most fixed-length format files and convert them to Parquet. You may also do validations or a quick transformation before saving to Parquet if needed.
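A minimal PySpark sketch of the idea, as it might look inside a Glue job; the column layout, field names, and S3 paths below are made-up placeholders, and each real file would supply its own (name, start, length) spec so the same code can be reused across the 50-60 layouts:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("fixed-width-to-parquet").getOrCreate()

    # Hypothetical layout: 1-based start position and length per field.
    layout = [
        ("customer_id", 1, 10),
        ("customer_name", 11, 30),
        ("signup_date", 41, 8),
    ]

    # Read each line of the fixed-length file as a single string column named "value".
    raw = spark.read.text("s3://my-bucket/raw/customers/")  # placeholder path

    # Slice each field out of the line and trim the padding fixed-width files carry.
    parsed = raw.select(
        *[F.trim(F.substring("value", start, length)).alias(name)
          for name, start, length in layout]
    )

    # Write Parquet to the location the Athena external table will point at.
    parsed.write.mode("overwrite").parquet("s3://my-bucket/parquet/customers/")  # placeholder path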
I'm currently working with IBM DataStage and here's my problem:
I have to take n datasets that will arrive in a folder and append them into one Data Set (.ds).
Since I don't know how many datasets I will have, nor their full names, I can't use a DataStage job to deal with them. All I know is that they will have the same metadata (because they are generated by the same job).
I think I have to use a shell command to append them, but I'm not a UNIX guy.
Thank you to everyone who has read this far.
You can use the same job. Specify Append mode (rather than Override) for the target Data Set; each time you run the job, data will be added to the same Data Set. Be careful not to inadvertently create duplicates by processing the same source data twice. Use parameters to specify the source.
I have a question regarding an effective way of reading values from a DB and generating a report.
I use Hadoop to read data from multiple tables and do data analysis based on the results.
I want to know if there is an effective tool or way to read data from multiple tables, check whether the values of certain columns are the same across tables, and send a report if they are not. I have two options: I can either read the data from Hadoop, or I can connect to the DB2 database and do it there. Without writing a new Java program, is there a tool that helps with this? Something like Talend, which reads XML and writes the output to a DB?
You can use Talend for this. Using Talend, you can read data from Hadoop as well as from the database, perform your comparison on the fetched data in between, and generate the report.
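Independent of Talend, the comparison itself boils down to joining the two sources on a key and reporting rows where a column disagrees. A rough PySpark sketch of that logic (not the Talend flow itself; the table names, JDBC URL, key column, and compared column are all placeholders, and it assumes Spark with Hive support plus a DB2 JDBC driver on the classpath):

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("column-compare").enableHiveSupport().getOrCreate()

    # Source 1: a Hive table already visible to Spark (placeholder name).
    hadoop_df = spark.table("warehouse.orders")

    # Source 2: the same logical data in DB2, read over JDBC (placeholder connection details).
    db2_df = (spark.read.format("jdbc")
              .option("url", "jdbc:db2://db2host:50000/SAMPLE")
              .option("dbtable", "SCHEMA.ORDERS")
              .option("user", "user").option("password", "secret")
              .load())

    # Rows where the two systems disagree on the column being checked.
    mismatches = (hadoop_df.alias("h")
                  .join(db2_df.alias("d"), "order_id")
                  .where("h.order_total <> d.order_total")
                  .select(F.col("order_id"),
                          F.col("h.order_total").alias("hadoop_total"),
                          F.col("d.order_total").alias("db2_total")))

    # Persist the report; any downstream notification can read from here.
    mismatches.write.mode("overwrite").csv("/reports/order_total_mismatches")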
If you're using a lot of data and do this sort of thing a lot, Elasticsearch is also a great help in this area; use the ELK stack, although you would not necessarily need the 'L' (Logstash) part of it.
I have a huge number of JSON files, >100 TB in total, each 10 GB bzipped, where each line contains a JSON object, and they are stored on S3.
If I want to transform the JSON into CSV (also stored on S3) so I can import it into Redshift directly, is writing custom code using Hadoop the only choice?
Would it be possible to do ad hoc queries on the JSON files without transforming the data into another format? Since the source keeps growing, I don't want to convert it every time I need to run a query.
The quickest and easiest way would be to launch an EMR cluster loaded with Hive to do the heavy lifting. Using a JsonSerde, you can easily transform the data into CSV format: you only need to insert the data into a CSV-formatted table from the JSON-formatted table.
A good tutorial for handling the JsonSerde can be found here:
http://aws.amazon.com/articles/2855
A good library for writing the CSV format is:
https://github.com/ogrodnek/csv-serde
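A rough sketch of those Hive steps, driven from Python on the EMR master node by handing HiveQL to the hive CLI. The SerDe class names follow the two libraries linked above, but the jar locations, columns, and S3 paths are placeholders and depend on your data and how the SerDes were installed:

    import subprocess

    hiveql = """
    ADD JAR /usr/lib/hive/lib/hive-json-serde.jar;
    ADD JAR /usr/lib/hive/lib/csv-serde.jar;

    CREATE EXTERNAL TABLE IF NOT EXISTS events_json (
      user_id STRING,
      event_type STRING,
      event_time STRING
    )
    ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
    LOCATION 's3://my-bucket/json-input/';

    CREATE EXTERNAL TABLE IF NOT EXISTS events_csv (
      user_id STRING,
      event_type STRING,
      event_time STRING
    )
    ROW FORMAT SERDE 'com.bizo.hive.serde.csv.CSVSerde'
    LOCATION 's3://my-bucket/csv-output/';

    INSERT OVERWRITE TABLE events_csv
    SELECT user_id, event_type, event_time FROM events_json;
    """

    # Run the statements through the Hive CLI on the master node.
    subprocess.run(["hive", "-e", hiveql], check=True)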
The EMR cluster can be short-lived, needed only for that one job, and it can also run on low-cost spot instances.
Once you have the CSV format, the Redshift COPY documentation should suffice.
http://docs.aws.amazon.com/redshift/latest/dg/r_COPY.html
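For completeness, a sketch of that final load from Python using psycopg2; the cluster endpoint, credentials, target table, S3 prefix, and IAM role ARN are placeholders, and it assumes the role has read access to the bucket (add a compression option such as GZIP or BZIP2 if the output files are compressed):

    import psycopg2

    copy_sql = """
    COPY analytics.events
    FROM 's3://my-bucket/csv-output/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
    CSV;
    """

    conn = psycopg2.connect(
        host="my-cluster.abc123.us-east-1.redshift.amazonaws.com",  # placeholder
        port=5439, dbname="dev", user="admin", password="secret"    # placeholders
    )
    with conn, conn.cursor() as cur:
        cur.execute(copy_sql)
    conn.close()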
I am using Amazon EMR Hadoop Hive for big data processing. The current data in my log files is in CSV format. In order to build the table from the log files, I wrote a regex to parse the data and store it into the different columns of an external table. I know that a SerDe can be used to read data in JSON format, which would mean each log file line is a JSON object. Are there any Hadoop performance advantages to having my log files in JSON format compared with CSV format?
If you can process the output of the table (the one you created with the regexp), why do additional processing? Try to avoid unnecessary steps.
I think the main issue here is which format is faster to read. I believe CSV will be faster than JSON, but don't take my word for it. Hadoop really doesn't care; once in memory, it's all byte arrays to it.
Our environment relies heavily on storing data in Hive. I am currently working on something that falls outside that scope, though. I have a MapReduce job written, but it requires a lot of direct user input for information that could easily be scraped from Hive. However, when I query Hive for extended table data, all of the extended information is dumped into one or two columns as a giant blob of almost-JSON. Is there either a convenient way to parse this information or, better yet, a way to get it more directly?
Alternatively, if I could be pointed to documentation on manually using the CombinedHiveInputFormat, that would simplify my code a lot. But it seems that InputFormat is only used inside Hive, with its custom structs.
Ultimately, what I want is to know the table names, columns (not including partitions), and partition locations for the split a mapper is working on. If there is yet another way to accomplish this, I am eager to hear it.