How to load data from a local machine to HDFS using Flume - hadoop

I am new to Flume, so please tell me how to store log files from my local machine to my HDFS using Flume.
I have issues in setting the classpath and the flume.conf file.
Thank you,
ajay

agent.sources = weblog
agent.channels = memoryChannel
agent.sinks = mycluster
## Sources #########################################################
agent.sources.weblog.type = exec
agent.sources.weblog.command = tail -F REPLACE-WITH-PATH2-your.log-FILE
agent.sources.weblog.batchSize = 1
agent.sources.weblog.channels = REPLACE-WITH-CHANNEL-NAME
## Channels ########################################################
agent.channels.memoryChannel.type = memory
agent.channels.memoryChannel.capacity = 100
agent.channels.memoryChannel.transactionCapacity = 100
## Sinks ###########################################################
agent.sinks.mycluster.type = REPLACE-WITH-CLUSTER-TYPE
agent.sinks.mycluster.hdfs.path = /user/root/flumedata
agent.sinks.mycluster.channel = REPLACE-WITH-CHANNEL-NAME
Save this file as logagent.conf and run it with the command below:
# flume-ng agent -n agent -f logagent.conf &
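With the memory channel defined above and an HDFS sink, one plausible way to fill in the placeholders (my guess, not part of the original answer) would be:
agent.sources.weblog.channels = memoryChannel
agent.sinks.mycluster.type = hdfs
agent.sinks.mycluster.channel = memoryChannel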

We do need more information to know why things are not working for you.
The short answer is that you need a Source to read your data from (maybe the spooling directory source), a Channel (memory channel if you don't need reliable storage) and the HDFS sink.
Update
The OP reports receiving the error message, "you must include conf file in flume class path".
You need to provide the conf file as an argument. You do so with the --conf-file parameter. For example, the command line I use in development is:
bin/flume-ng agent --conf-file /etc/flume-ng/conf/flume.conf --name castellan-indexer --conf /etc/flume-ng/conf
The error message reads that way because the bin/flume-ng script adds the contents of the --conf-file argument to the classpath before running Flume.

If you are appending data to your local file, you can use an exec source with the "tail -F" command. If the file is static, use the cat command to transfer the data to Hadoop.
The overall architecture would be:
Source: Exec source reading data from your file
Channel: Either a memory channel or a file channel
Sink: HDFS sink where the data is dumped
Use the user guide to create your conf file (https://flume.apache.org/FlumeUserGuide.html).
Once you have your conf file ready, you can run it like this:
bin/flume-ng agent -n $agent_name -c conf -f conf/your-flume-conf.conf
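As an illustration only, a minimal conf along the lines described above might look like the following sketch (the agent name a1 and the log path /var/log/myapp.log are made up for the example):
a1.sources = r1
a1.channels = c1
a1.sinks = k1
# exec source tailing the local log file
a1.sources.r1.type = exec
a1.sources.r1.command = tail -F /var/log/myapp.log
a1.sources.r1.channels = c1
# in-memory channel (not durable across agent restarts)
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
# HDFS sink writing plain text files
a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.path = hdfs://localhost:8020/user/flume/logs
a1.sinks.k1.hdfs.fileType = DataStream
a1.sinks.k1.channel = c1
It would be started with the same command shown above, substituting the agent name:
bin/flume-ng agent -n a1 -c conf -f conf/your-flume-conf.conf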

Related

How to write Wireshark data to HBase with Flume

I am trying to put real-time Wireshark data into HBase with Flume using the following agent configuration.
Source
A1.Source.k1.type = exec
A1.Source.k1.command = tail -f /usr/sbin/tshark
Sink
A1.Sinks.C1.Type = hbase
A1.Sinks.C1.columnFamily =
A1.Sinks.C1.table =
And I run tshark as root:
tshark -i eth0
Data seems to be stored, but it looks like this - x0/x0/x0/.
Any idea where I am going wrong?

How to extract all the collected tweets in a single file

I'm using Flume to collect tweets and store them on HDFS.
The collecting part is working fine, and I can find all my tweets in my file system.
Now I would like to extract all these tweets in one single file.
The problem is that the different tweets are stored as follows:
As we can see, the tweets are stored inside blocks of 128 MB but only use a few KB, which is normal behaviour for HDFS, correct me if I'm wrong.
However, how could I get all the different tweets into one single file?
Here is my conf file, which I run with the following command:
flume-ng agent -n TwitterAgent -f ./my-flume-files/twitter-stream-tvseries.conf
twitter-stream-tvseries.conf :
TwitterAgent.sources = Twitter
TwitterAgent.channels = MemChannel
TwitterAgent.sinks = HDFS
TwitterAgent.sources.Twitter.type = org.apache.flume.source.twitter.TwitterSource
TwitterAgent.sources.Twitter.consumerKey=hidden
TwitterAgent.sources.Twitter.consumerSecret=hidden
TwitterAgent.sources.Twitter.accessToken=hidden
TwitterAgent.sources.Twitter.accessTokenSecret=hidden
TwitterAgent.sources.Twitter.keywords=GoT, GameofThrones
TwitterAgent.sinks.HDFS.channel=MemChannel
TwitterAgent.sinks.HDFS.type=hdfs
TwitterAgent.sinks.HDFS.hdfs.path=hdfs://ip-addressl:8020/user/root/data/twitter/tvseries/tweets
TwitterAgent.sinks.HDFS.hdfs.fileType=DataStream
TwitterAgent.sinks.HDFS.hdfs.writeFormat=Text
TwitterAgent.sinks.HDFS.hdfs.batchSize=1000
TwitterAgent.sinks.HDFS.hdfs.rollSize=0
TwitterAgent.sinks.HDFS.hdfs.rollCount=10000
TwitterAgent.sinks.HDFS.hdfs.rollInterval=600
TwitterAgent.channels.MemChannel.type=memory
TwitterAgent.channels.MemChannel.capacity=10000
TwitterAgent.channels.MemChannel.transactionCapacity=1000
TwitterAgent.sources.Twitter.channels = MemChannel
TwitterAgent.sinks.HDFS.channel = MemChannel
You can configure the HDFS sink to roll files by time, event count, or size. So, if you want to keep writing events to the same file until a 120 MB limit is reached, set:
hdfs.rollInterval = 0 # roll based on elapsed time; 0 disables it
hdfs.rollSize = 125829120 # roll based on file size (120 MB here)
hdfs.rollCount = 0 # roll based on number of events (different tweets in your case); 0 disables it
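Applied to the TwitterAgent conf above, that would look something like this (125829120 bytes is roughly 120 MB):
TwitterAgent.sinks.HDFS.hdfs.rollInterval = 0
TwitterAgent.sinks.HDFS.hdfs.rollSize = 125829120
TwitterAgent.sinks.HDFS.hdfs.rollCount = 0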
You can use the following command to concatenate the files into a single file:
find . -type f -name 'FlumeData*' -exec cat {} + >> output.file
Or, if you want to store the data in Hive tables for later analysis, create an external table and consume it into a Hive DB.

Flume-ng: source path and type for copying log file from local to HDFS

I am trying to copy some log files from local to HDFS using flume-ng. The source is /home/cloudera/flume/weblogs/ and the sink is hdfs://localhost:8020/flume/dump/. A cron job copies the logs from the Tomcat server to /home/cloudera/flume/weblogs/, and I want the log files to be copied to HDFS as they become available in /home/cloudera/flume/weblogs/ using flume-ng. Below is the conf file I created:
agent1.sources= local
agent1.channels= MemChannel
agent1.sinks=HDFS
agent1.sources.local.type = ???
agent1.sources.local.channels=MemChannel
agent1.sinks.HDFS.channel=MemChannel
agent1.sinks.HDFS.type=hdfs
agent1.sinks.HDFS.hdfs.path=hdfs://localhost:8020/flume/dump/
agent1.sinks.HDFS.hdfs.fileType=DataStream
agent1.sinks.HDFS.hdfs.writeFormat=Text
agent1.sinks.HDFS.hdfs.batchSize=1000
agent1.sinks.HDFS.hdfs.rollSize=0
agent1.sinks.HDFS.hdfs.rollCount=10000
agent1.sinks.HDFS.hdfs.rollInterval=600
agent1.channels.MemChannel.type=memory
agent1.channels.MemChannel.capacity=10000
agent1.channels.MemChannel.transactionCapacity=100
I am not able to understand:
1) what will be the value of agent1.sources.local.type = ???
2) where to mention the source path /home/cloudera/flume/weblogs/ in the above conf file ?
3) Is there anything I am missing in the above conf file?
Please let me know about these.
You can use either:
An Exec Source with a command (e.g. cat or tail on GNU/Linux) run on your files
Or a Spooling Directory Source to read all the files in a directory, as sketched below
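For the spooling directory option, a sketch of how the missing pieces of the conf above could be filled in (using the directory from the question; a sketch, not a tested answer):
agent1.sources.local.type = spooldir
agent1.sources.local.spoolDir = /home/cloudera/flume/weblogs/
agent1.sources.local.channels = MemChannel
Note that the spooling directory source expects each file to be complete and immutable once it lands in the directory, which fits the cron-copy setup described above.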

Spark - on EMR saveAsTextFile won't write data to local dir

Running Spark on EMR (AMI 3.8). When trying to write an RDD to a local file, I am getting no results on the name/master node.
On my previous EMR cluster (same version of Spark installed with bootstrap script instead of as an add-on to EMR), the data would write to the local dir on the name node. Now I can see it appearing in "/home/hadoop/test/_temporary/0/task*" directories on the other nodes in the cluster, but only the 'SUCCESS' file on the master node.
How can I get the file to write to the name/master node only?
Here is an example of the command I am using:
myRDD.saveAsTextFile("file:///home/hadoop/test")
I can do this in a roundabout way by pushing to HDFS first and then writing the results to the local filesystem with shell commands. But I would love to hear if others have a more elegant approach.
// rdd to local text file
import org.apache.spark.rdd.RDD
import scala.sys.process._

def rddToFile(rdd: RDD[_], filePath: String) = {
  // setting up bash commands: concatenate the HDFS part files into one
  // local file, and remove the HDFS directory again afterwards
  val createFileStr = "hadoop fs -cat " + filePath + "/part* > " + filePath
  val removeDirStr = "hadoop fs -rm -r " + filePath
  // rm dir in case it exists
  Process(Seq("bash", "-c", removeDirStr)).!
  // save data to HDFS
  rdd.saveAsTextFile(filePath)
  // write data to local file
  Process(Seq("bash", "-c", createFileStr)).!
  // rm HDFS dir
  Process(Seq("bash", "-c", removeDirStr)).!
}

I want to read a log file in Flume which is on a different server

I want to read a log file in Flume that is on a different server, which is up and running. How can I achieve this by changing my flume-conf.properties file? What should I write in the Flume configuration file to achieve this?
a1.sources = AspectJ
a1.channels = memoryChannel
a1.sinks = kafkaSink
a1.sources.AspectJ.type = com.flume.MySource
a1.sources.AspectJ.command = tail -F /tmp/data/Log.txt
To achieve this, what should I write in place of
a1.sources.AspectJ.command = tail -F /tmp/data/Log.txt
I believe what you want to ask is: if Flume is set up on host 'F' and your log files exist on host 'L', how do you configure Flume to read the log files from host 'L', correct?
If so, then you need to set up Flume on host 'L' and not on 'F'. Set up Flume on the same host where the log files are, and point the sink to a Kafka topic.
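As a rough sketch only, the agent on host 'L' could keep the source from the question and add a Kafka sink along these lines (the broker address and topic name are placeholders, and the property names are the Flume 1.7+ style):
a1.sources = AspectJ
a1.channels = memoryChannel
a1.sinks = kafkaSink
a1.sources.AspectJ.type = com.flume.MySource
a1.sources.AspectJ.command = tail -F /tmp/data/Log.txt
a1.sources.AspectJ.channels = memoryChannel
a1.channels.memoryChannel.type = memory
a1.sinks.kafkaSink.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.kafkaSink.kafka.bootstrap.servers = broker-host:9092
a1.sinks.kafkaSink.kafka.topic = mylogs
a1.sinks.kafkaSink.channel = memoryChannel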
