IIS Logs Streaming to Hadoop in Real Time - hadoop

I am trying to do a POC in Hadoop for log aggregation. We have multiple IIS servers hosting at least 100 sites. I want to stream the logs continuously to HDFS, parse the data, and store it in Hive for further analytics.
1) Is Apache Kafka the right choice, or Apache Flume?
2) After streaming, is it better to use Apache Storm to ingest the data into Hive?
Please help with any suggestions, and with any information about this kind of problem statement.
Thanks

You can use either Kafka or Flume, and you can also combine both, to get data into HDFS, but you would need to write code for that. There are open-source data flow management tools available with which you don't need to write code, e.g. NiFi and StreamSets.
You don't need any separate ingestion tools; you can use those data flow tools directly to put data into a Hive table. Once the table is created in Hive, you can do your analytics by writing queries.
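As a rough illustration of that last step (the HDFS path, table, and column names below are made up, and the real columns depend on which W3C fields your IIS sites log), the Hive side might look something like this:

    -- Hypothetical external table over raw IIS W3C logs landed in HDFS by NiFi/StreamSets
    CREATE EXTERNAL TABLE iis_logs_raw (
      log_date   STRING,
      log_time   STRING,
      s_sitename STRING,
      cs_method  STRING,
      cs_uri     STRING,
      sc_status  INT,
      time_taken INT
    )
    ROW FORMAT DELIMITED FIELDS TERMINATED BY ' '
    STORED AS TEXTFILE
    LOCATION '/data/iis/raw';

    -- Example analytics query once data is flowing
    -- (IIS writes '#' header lines; filter them out here or upstream in the flow)
    SELECT s_sitename, sc_status, COUNT(*) AS hits
    FROM iis_logs_raw
    WHERE log_date NOT LIKE '#%'
    GROUP BY s_sitename, sc_status;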
Let me know if you need anything else on this.

Related

Newbie: Hadoop IIS Logs - Reasonable approach?

I am a total beginner on the topic of Hadoop, so sorry if this is a stupid question.
My fictional scenario is that I have several web servers (IIS) with several log locations. I want to centralize these log files and, based on the data, analyze the health of the applications and the web servers.
Since the Hadoop ecosystem offers a variety of tools, I am not sure if my solution is a valid one.
So I thought that I move the log files to HDFS, create an external table on the directory and an internal table, and copy the data via Hive (insert into ... select from) from the external table to the internal table (with some filtering because of the comment lines beginning with #).
When the data is stored in the internal table, I delete the previously moved files from HDFS.
Technically it works, I have already tried it - but is this a reasonable approach?
And if yes, how would I automate these steps? So far I have done all the stuff manually via Ambari.
Thanks for your input
BW
Yes, this is a perfectly fine approach.
Outside of setting up the Hive tables ahead of time, what's left to automate?
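For reference, that one-time setup plus the filtered copy you describe is only a couple of statements. A rough sketch with made-up table and column names, assuming an external table iis_raw already points at the HDFS log directory:

    -- Internal (managed) table, e.g. stored as ORC for faster queries
    CREATE TABLE iis_clean (log_date STRING, log_time STRING, cs_uri STRING, sc_status INT)
    STORED AS ORC;

    -- Copy with filtering: skip the W3C comment lines that begin with '#'
    INSERT INTO TABLE iis_clean
    SELECT log_date, log_time, cs_uri, sc_status
    FROM iis_raw
    WHERE log_date NOT LIKE '#%';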
You want to run things on a schedule? Use Oozie, Luigi, Airflow, or Azkaban.
Ingesting logs from other Windows servers because you have a highly available web service? Use Puppet, for example, to configure your log collection agents (not Hadoop related).
Note: if it's only log file collection that you care about, I would probably use Elasticsearch instead of Hadoop to store the data, Filebeat to continuously watch the log files, Logstash to apply per-message filtering, and Kibana for visualizations. If you combine Elasticsearch for fast indexing/searching with Hadoop for archival, you can insert Kafka between the log message ingestion and the message writers/consumers.

Best way to automate getting data from CSV files to a data lake

I need to get data from CSV files (daily extractions from different business databases) into HDFS, then move it to HBase, and finally load aggregations of this data into a data mart (SQL Server).
I would like to know the best way to automate this process (using Java or Hadoop tools).
I'd echo the comment above re. Kafka Connect, which is part of Apache Kafka. With this you just use configuration files to stream from your sources; you can use KSQL to create derived/enriched/aggregated streams (rough sketch below), and then stream these to HDFS/Elastic/HBase/JDBC/etc.
There's a list of Kafka Connect connectors here.
This blog series walks through the basics:
https://www.confluent.io/blog/simplest-useful-kafka-connect-data-pipeline-world-thereabouts-part-1/
https://www.confluent.io/blog/blogthe-simplest-useful-kafka-connect-data-pipeline-in-the-world-or-thereabouts-part-2/
https://www.confluent.io/blog/simplest-useful-kafka-connect-data-pipeline-world-thereabouts-part-3/
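As a flavour of that KSQL step (the stream, topic, and column names here are invented, and assume the CSV rows have already been produced onto a Kafka topic):

    -- Register a stream over the raw CSV topic
    CREATE STREAM orders_raw (order_id VARCHAR, region VARCHAR, amount DOUBLE)
      WITH (KAFKA_TOPIC='orders_csv', VALUE_FORMAT='DELIMITED');

    -- Derived aggregate that a sink connector (HDFS, HBase, JDBC, ...) can then pick up
    CREATE TABLE orders_by_region AS
      SELECT region, SUM(amount) AS total_amount
      FROM orders_raw
      GROUP BY region;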
Little to no coding required? In no particular order:
Talend Open Studio
Streamsets Data Collector
Apache Nifi
Assuming you can set up a Kafka cluster, you can try Kafka Connect.
If you want to program something, probably Spark. Otherwise, pick your favorite language. Schedule the job via Oozie.
If you don't need the raw HDFS data, you can load it directly into HBase.
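If Hive is already in your stack, one low-code way to cover the HBase leg (purely illustrative, with made-up table and column names, and not something the answers above prescribe) is Hive's HBase storage handler:

    -- Hive table backed by an HBase table named 'sales'
    CREATE TABLE sales_hbase (order_id STRING, region STRING, amount DOUBLE)
    STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
    WITH SERDEPROPERTIES ('hbase.columns.mapping' = ':key,d:region,d:amount')
    TBLPROPERTIES ('hbase.table.name' = 'sales');

    -- Copy rows from an external table that sits over the CSV files already landed in HDFS
    INSERT INTO TABLE sales_hbase
    SELECT order_id, region, amount FROM sales_csv_ext;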

Splunk replacement with Flume or Kafka

I need your help with one suggestion. In the current scenario, we have one application in the cloud, and via Splunk we have the ability to view its logs. I am thinking of implementing this using our big data tools like Flume/Kafka, whereby I can take the real-time log data from the cloud (currently collected by Splunk) and make it available in our HDFS. A few concerns here:
Is this feasible and does it make sense?
For log search (the same capability as Splunk), which tool can we use?
If you just want to move logs into HDFS, you can use Flume with the HDFS sink.
There are also a few other options available, like:
Logstash
You can use other frameworks like Elasticsearch and Kibana to have more functionality available for the logs.

How to Stream Data To an EMR Cluster

I would appreciate ideas on how to stream data from an on-premises Windows server to a persistent EMR cluster.
Some Background
I would like to run a persistent cluster running an MR job, much like the WordCount examples that are available. I would like to stream text from a local Windows server up to the cluster and have it processed by the running job.
All of the streaming WordCount examples I have reviewed always start with a static text file in S3 and don't cover how to implement anything to generate the stream.
Does this need to be treated in two parts?
Get the data first into S3
Stream it into the EMR cluster?
I have seen tools like Logstash, which tend to run agents on the local server that tail the end of a web log and transfer it.
As you can probably tell, I'm a Windows guy, stretching into EMR and by association Linux. Feel free to let me know if there is some cool command-line tool that already does this.
Thanks in advance.
Currently, EMR as-is only supports MR, Hive, Pig, HBase, and Impala. MR/Hive/Pig process data in a batch-oriented fashion, and data can't be streamed to them, while HBase is a NoSQL DB and Impala is used for interactive ad-hoc queries.
For processing streaming data there are a lot of other options, like Storm, Samza, and S4. From AWS there is Kinesis, which recently moved into GA.
Yes, a static file would go into S3 and then be the input to your EMR cluster job.
But I believe the fact that you want a persistent cluster implies you are streaming continuously from your Windows server. Is that the case?
If so, you need to create an AWS Kinesis stream and configure your producers, which put data into the stream's shards by calling PutRecord.
Start by reading "Developing Record Consumer Applications"
I think you could use Apache Flume (https://flume.apache.org/).
Flume is a distributed, reliable, and available service for efficiently collecting, aggregating, and moving large amounts of log data. It has a simple and flexible architecture based on streaming data flows. It is robust and fault tolerant with tunable reliability mechanisms and many failover and recovery mechanisms. It uses a simple extensible data model that allows for online analytic application.

Anyone using DynamoDB and Hive without using EMR?

I was reading the below integration of using Hive for querying data on DynamoDB.
http://aws.typepad.com/aws/2012/01/aws-howto-using-amazon-elastic-mapreduce-with-dynamodb.html
But as per that link, Hive needs to be set up on top of EMR. I wanted to know if I can use this integration with the standalone Hadoop cluster I already have instead of using EMR. Has anyone done this? Will there be sync issues between the data in DynamoDB and HDFS compared to using EMR?
To be able to use it on your own cluster, you would need the custom StorageHandler for DynamoDB (it probably involves a custom SerDe as well).
It does not seem to be available at the moment, at least not on the AWS website.
What you can do is use the JDBC interface provided by Amazon to issue the queries from your cluster, but they would still be executed on top of EMR.
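For context, on EMR the integration is exposed as roughly the DDL below (table and column names are made up); the storage handler class it names is the EMR-shipped piece you would be missing on a standalone cluster:

    -- Illustrative only: this storage handler ships with Hive on EMR, not with stock Hive
    CREATE EXTERNAL TABLE orders_ddb (order_id STRING, amount DOUBLE)
    STORED BY 'org.apache.hadoop.hive.dynamodb.DynamoDBStorageHandler'
    TBLPROPERTIES (
      'dynamodb.table.name'     = 'Orders',
      'dynamodb.column.mapping' = 'order_id:OrderId,amount:Amount'
    );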
