Configuring flume to not generate .tmp files when sinking data to hdfs - hadoop

I am using Flume to stream server log data into HDFS. While the data is being streamed into HDFS, a .tmp file is created first. Is there a way in the configuration to hide the .tmp files, or to change their name by appending a "." in front? My collector agent file looks like this:
## TARGET AGENT ##
## configuration file location: /etc/flume/conf
## START Agent: flume-ng agent -c conf -f /etc/flume/conf/flume-trg-agent.conf -n collector
#http://flume.apache.org/FlumeUserGuide.html#avro-source
collector.sources = AvroIn
collector.sources.AvroIn.type = avro
collector.sources.AvroIn.bind = 0.0.0.0
collector.sources.AvroIn.port = 4545
collector.sources.AvroIn.channels = mc1 mc2
## Channels ##
## Source writes to 2 channels, one for each sink
collector.channels = mc1 mc2
#http://flume.apache.org/FlumeUserGuide.html#memory-channel
collector.channels.mc1.type = memory
collector.channels.mc1.capacity = 100
collector.channels.mc2.type = memory
collector.channels.mc2.capacity = 100
## Sinks ##
collector.sinks = LocalOut HadoopOut
## Write copy to Local Filesystem
#http://flume.apache.org/FlumeUserGuide.html#file-roll-sink
#collector.sinks.LocalOut.type = file_roll
#collector.sinks.LocalOut.sink.directory = /var/log/flume
#collector.sinks.LocalOut.sink.rollInterval = 0
#collector.sinks.LocalOut.channel = mc1
## Write to HDFS
#http://flume.apache.org/FlumeUserGuide.html#hdfs-sink
collector.sinks.HadoopOut.type = hdfs
collector.sinks.HadoopOut.channel = mc2
collector.sinks.HadoopOut.hdfs.path = /user/root/flume-channel/%{log_type}
collector.sinks.HadoopOut.hdfs.filePrefix = events-
collector.sinks.HadoopOut.hdfs.fileType = DataStream
collector.sinks.HadoopOut.hdfs.writeFormat = Text
collector.sinks.HadoopOut.hdfs.rollSize = 1000000
Any help will be appreciated.

By default, every file that Flume has open for writing carries a .tmp extension. You can change this extension, but you cannot remove it entirely; it is needed to distinguish files that are still being written from files that have already been closed. A common approach is therefore to add a prefix such as "." so that in-progress files are hidden. The Flume HDFS sink offers the following parameters:
hdfs.inUsePrefix (default: empty) – prefix used for temporary files that Flume is actively writing to
hdfs.inUseSuffix (default: .tmp) – suffix used for temporary files that Flume is actively writing to; if it is left blank, .tmp is used, otherwise the specified suffix is used
For example, to hide the in-progress files in your configuration:
collector.sinks.HadoopOut.hdfs.inUsePrefix = .

Alternatively, set hdfs.idleTimeout = x, where x is a positive number of seconds, so that idle files are closed (and renamed to their final name) after that interval.
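Putting the two suggestions together, a minimal sketch for the HadoopOut sink above might look like this (the "." prefix hides in-progress files from tools that skip dot files, and the 60-second idle timeout is an assumed value you would tune to your traffic):
collector.sinks.HadoopOut.hdfs.inUsePrefix = .
collector.sinks.HadoopOut.hdfs.idleTimeout = 60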

Related

Hadoop:copying csv file to hdfs using flume spool dir, Error: INFO source.SpoolDirectorySource: Spooling Directory Source runner has shutdown

I am trying to use a Flume spooling directory to copy CSV files to HDFS. As I am a beginner with Hadoop concepts, please help me resolve the issue below.
HDFS directory: /home/hdfs
Flume directory: /etc/flume/
The flume-hwdgteam01.conf file is shown below:
# Define a source, a channel, and a sink
hwdgteam01.sources = src1
hwdgteam01.channels = chan1
hwdgteam01.sinks = sink1
# Set the source type to Spooling Directory and set the directory
# location to /home/flume/ingestion/
hwdgteam01.sources.src1.type = spooldir
hwdgteam01.sources.src1.spoolDir = /home/hwdgteam01/nandan/input-data
hwdgteam01.sources.src1.basenameHeader = true
# Configure the channel as simple in-memory queue
hwdgteam01.channels.chan1.type = memory
# Define the HDFS sink and set its path to your target HDFS directory
hwdgteam01.sinks.sink1.type = hdfs
hwdgteam01.sinks.sink1.hdfs.path = /home/datalanding
hwdgteam01.sinks.sink1.hdfs.fileType = DataStream
# Disable rollover functionallity as we want to keep the original files
hwdgteam01.sinks.sink1.rollCount = 0
hwdgteam01.sinks.sink1.rollInterval = 0
hwdgteam01.sinks.sink1.rollSize = 0
hwdgteam01.sinks.sink1.idleTimeout = 0
# Set the files to their original name
hwdgteam01.sinks.sink1.hdfs.filePrefix = %{basename}
# Connect source and sink
hwdgteam01.sources.src1.channels = chan1
hwdgteam01.sinks.sink1.channel = chan1
I executed the commands in the following ways:
/usr/bin/flume-ng agent --conf conf --conf-file /home/hwdgteam01/nandan/config/flume-hwdgteam01.conf -Dflume.root.logger=DEBUG,console --name hwdgteam01
OR
/usr/bin/flume-ng agent -n hwdgteam01 -f /home/hwdgteam01/nandan/config/flume-hwdgteam01.conf
OR
/home/hwdgteam01/nandan/config/flume-ng agent -n hwdgteam01 -f /home/hwdgteam01/nandan/config/flume-hwdgteam01.conf
But nothing worked out and I am getting the following error (flume error msg).
Please let me know where I am going wrong.
Thanks for any help.
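One thing worth checking, purely as an assumption and not a confirmed fix for the shutdown error: in the Flume HDFS sink the roll and idle-timeout settings live under the hdfs. namespace (as in the other configurations on this page), and the target path usually spells out the hdfs:// URI. A hypothetical sketch of the sink section written that way, with a placeholder namenode address:
# hypothetical sketch only -- <namenode>:8020 is a placeholder, not taken from the question
hwdgteam01.sinks.sink1.type = hdfs
hwdgteam01.sinks.sink1.hdfs.path = hdfs://<namenode>:8020/home/datalanding
hwdgteam01.sinks.sink1.hdfs.fileType = DataStream
hwdgteam01.sinks.sink1.hdfs.rollCount = 0
hwdgteam01.sinks.sink1.hdfs.rollInterval = 0
hwdgteam01.sinks.sink1.hdfs.rollSize = 0
hwdgteam01.sinks.sink1.hdfs.idleTimeout = 0
hwdgteam01.sinks.sink1.hdfs.filePrefix = %{basename}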

flume taking time to copy data into hdfs when rolling based on file size

I have a use case where I want to copy a remote file into HDFS using Flume. I also want the copied files to align with the HDFS block size (128 MB/256 MB). The total size of the remote data is 33 GB.
I am using an Avro source and sink to copy the remote data into HDFS. On the sink side I am rolling files based on size (128/256 MB), but to copy a file from the remote machine and store it in HDFS (file size 128/256 MB), Flume takes an average of 2 minutes.
Flume Configuration:
Avro Source (remote machine)
### Agent1 - Spooling Directory Source and File Channel, Avro Sink ###
# Name the components on this agent
Agent1.sources = spooldir-source
Agent1.channels = file-channel
Agent1.sinks = avro-sink
# Describe/configure Source
Agent1.sources.spooldir-source.type = spooldir
Agent1.sources.spooldir-source.spoolDir =/home/Benchmarking_Simulation/test
# Describe the sink
Agent1.sinks.avro-sink.type = avro
Agent1.sinks.avro-sink.hostname = xx.xx.xx.xx #IP Address destination machine
Agent1.sinks.avro-sink.port = 50000
#Use a channel which buffers events in file
Agent1.channels.file-channel.type = file
Agent1.channels.file-channel.checkpointDir = /home/Flume_CheckPoint_Dir/
Agent1.channels.file-channel.dataDirs = /home/Flume_Data_Dir/
Agent1.channels.file-channel.capacity = 10000000
Agent1.channels.file-channel.transactionCapacity=50000
# Bind the source and sink to the channel
Agent1.sources.spooldir-source.channels = file-channel
Agent1.sinks.avro-sink.channel = file-channel
Avro Sink (machine where HDFS is running)
### Agent1 - Avro Source and File Channel, Avro Sink ###
# Name the components on this agent
Agent1.sources = avro-source1
Agent1.channels = file-channel1
Agent1.sinks = hdfs-sink1
# Describe/configure Source
Agent1.sources.avro-source1.type = avro
Agent1.sources.avro-source1.bind = xx.xx.xx.xx
Agent1.sources.avro-source1.port = 50000
# Describe the sink
Agent1.sinks.hdfs-sink1.type = hdfs
Agent1.sinks.hdfs-sink1.hdfs.path =/user/Benchmarking_data/multiple_agent_parallel_1
Agent1.sinks.hdfs-sink1.hdfs.rollInterval = 0
Agent1.sinks.hdfs-sink1.hdfs.rollSize = 130023424
Agent1.sinks.hdfs-sink1.hdfs.rollCount = 0
Agent1.sinks.hdfs-sink1.hdfs.fileType = DataStream
Agent1.sinks.hdfs-sink1.hdfs.batchSize = 50000
Agent1.sinks.hdfs-sink1.hdfs.txnEventMax = 40000
Agent1.sinks.hdfs-sink1.hdfs.threadsPoolSize=1000
Agent1.sinks.hdfs-sink1.hdfs.appendTimeout = 10000
Agent1.sinks.hdfs-sink1.hdfs.callTimeout = 200000
#Use a channel which buffers events in file
Agent1.channels.file-channel1.type = file
Agent1.channels.file-channel1.checkpointDir = /home/Flume_Check_Point_Dir
Agent1.channels.file-channel1.dataDirs = /home/Flume_Data_Dir
Agent1.channels.file-channel1.capacity = 100000000
Agent1.channels.file-channel1.transactionCapacity=100000
# Bind the source and sink to the channel
Agent1.sources.avro-source1.channels = file-channel1
Agent1.sinks.hdfs-sink1.channel = file-channel1
Network connectivity between the two machines is 686 Mbps.
Can somebody please help me identify whether something is wrong in the configuration, or suggest an alternate configuration, so that the copying doesn't take so much time?
Both agents use a file channel, so before the data reaches HDFS it has already been written to disk twice. You can try using a memory channel for each agent to see if performance improves.
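For reference, a minimal sketch of what the channel section on the HDFS-side agent could look like with a memory channel (the capacity and transactionCapacity values are assumptions that must be tuned to your event rate and heap size; a memory channel also trades durability for speed, since buffered events are lost if the agent dies):
Agent1.channels = mem-channel1
Agent1.channels.mem-channel1.type = memory
Agent1.channels.mem-channel1.capacity = 1000000
Agent1.channels.mem-channel1.transactionCapacity = 100000
Agent1.sources.avro-source1.channels = mem-channel1
Agent1.sinks.hdfs-sink1.channel = mem-channel1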

Writing to flume using spool directory how to rename file

I am writing to HDFS using a Flume spooling directory. Here is my configuration:
#initialize agent's source, channel and sink
agent.sources = test
agent.channels = memoryChannel
agent.sinks = flumeHDFS
# Setting the source to spool directory where the file exists
agent.sources.test.type = spooldir
agent.sources.test.spoolDir = /johir
agent.sources.test.fileHeader = false
agent.sources.test.fileSuffix = .COMPLETED
# Setting the channel to memory
agent.channels.memoryChannel.type = memory
# Max number of events stored in the memory channel
agent.channels.memoryChannel.capacity = 10000
# agent.channels.memoryChannel.batchSize = 15000
agent.channels.memoryChannel.transactioncapacity = 1000000
# Setting the sink to HDFS
agent.sinks.flumeHDFS.type = hdfs
agent.sinks.flumeHDFS.hdfs.path =/user/root/
agent.sinks.flumeHDFS.hdfs.fileType = DataStream
# Write format can be text or writable
agent.sinks.flumeHDFS.hdfs.writeFormat = Text
# use a single csv file at a time
agent.sinks.flumeHDFS.hdfs.maxOpenFiles = 1
# rollover file based on maximum size of 10 MB
agent.sinks.flumeHDFS.hdfs.rollCount=0
agent.sinks.flumeHDFS.hdfs.rollInterval=0
agent.sinks.flumeHDFS.hdfs.rollSize = 1000000
agent.sinks.flumeHDFS.hdfs.batchSize =1000
# never rollover based on the number of events
agent.sinks.flumeHDFS.hdfs.rollCount = 0
# rollover file based on max time of 1 min
#agent.sinks.flumeHDFS.hdfs.rollInterval = 0
# agent.sinks.flumeHDFS.hdfs.idleTimeout = 600
# Connect source and sink with channel
agent.sources.test.channels = memoryChannel
agent.sinks.flumeHDFS.channel = memoryChannel
But the problem is that the data written to HDFS ends up in a file with a random tmp name. How can I rename the file in HDFS to the original file name from the source directory? For example, I have the files day1.txt, day2.txt and day3.txt, which hold data for different days. I want them stored in HDFS as day1.txt, day2.txt and day3.txt, but these three files are merged and stored in HDFS as a single FlumeData.1464629158164.tmp file. Is there any way to do this?
If you want to retain the original file name, you should attach the filename as a header to each event.
Set the basenameHeader property to true. This will create a header with the basename key unless set to something else using the basenameHeaderKey property.
Use the hdfs.filePrefix property to set the filename using basenameHeader values.
Add the below properties to your configuration file.
#source properties
agent.sources.test.basenameHeader = true
#sink properties
agent.sinks.flumeHDFS.type = hdfs
agent.sinks.flumeHDFS.hdfs.filePrefix = %{basename}

sink.hdfs writer adds garbage in my text file

I have successfully configured Flume to transfer text files from a local folder to HDFS. My problem is that when a file is transferred into HDFS, some unwanted text ("hdfs.write.Longwriter" plus binary characters) is prefixed to my text file.
Here is my flume.conf:
agent.sources = flumedump
agent.channels = memoryChannel
agent.sinks = flumeHDFS
agent.sources.flumedump.type = spooldir
agent.sources.flumedump.spoolDir = /opt/test/flume/flumedump/
agent.sources.flumedump.channels = memoryChannel
# Each sink's type must be defined
agent.sinks.flumeHDFS.type = hdfs
agent.sinks.flumeHDFS.hdfs.path = hdfs://bigdata.ibm.com:9000/user/vin
agent.sinks.flumeHDFS.fileType = DataStream
#Format to be written
agent.sinks.flumeHDFS.hdfs.writeFormat = Text
agent.sinks.flumeHDFS.hdfs.maxOpenFiles = 10
# rollover file based on maximum size of 10 MB
agent.sinks.flumeHDFS.hdfs.rollSize = 10485760
# never rollover based on the number of events
agent.sinks.flumeHDFS.hdfs.rollCount = 0
# rollover file based on max time of 1 mi
agent.sinks.flumeHDFS.hdfs.rollInterval = 60
#Specify the channel the sink should use
agent.sinks.flumeHDFS.channel = memoryChannel
# Each channel's type is defined.
agent.channels.memoryChannel.type = memory
# Other config values specific to each type of channel(sink or source)
# can be defined as well
# In this case, it specifies the capacity of the memory channel
agent.channels.memoryChannel.capacity = 100
My source text file is very simple, containing the text:
Hi My name is Hadoop and this is file one.
The sink file I get in HDFS looks like this:
SEQ !org.apache.hadoop.io.LongWritable org.apache.hadoop.io.Text������5����>I <4 H�ǥ�+Hi My name is Hadoop and this is file one.
Please let me know what I am doing wrong.
Figured it out.
I had to fix this line:
agent.sinks.flumeHDFS.fileType = DataStream
and change it to
agent.sinks.flumeHDFS.hdfs.fileType = DataStream
That fixed the issue: without the hdfs. prefix the fileType setting is ignored, so the sink falls back to its default SequenceFile format, which is what writes the LongWritable/Text header and the binary characters shown above.

Flume Tail a File

I am new to Flume-NG and need help tailing a file. I have a cluster running Hadoop, with Flume running remotely. I communicate with this cluster using PuTTY. I want to tail a file on my PC and put it into HDFS on the cluster. I am using the following configuration to do this.
#flume.conf: exec source, hdfs sink
# Name the components on this agent
tier1.sources = r1
tier1.sinks = k1
tier1.channels = c1
# Describe/configure the source
tier1.sources.r1.type = exec
tier1.sources.r1.command = tail -F /(Path to file on my PC)
# Describe the sink
tier1.sinks.k1.type = hdfs
tier1.sinks.k1.hdfs.path = /user/ntimbadi/flume/
tier1.sinks.k1.hdfs.filePrefix = events-
tier1.sinks.k1.hdfs.round = true
tier1.sinks.k1.hdfs.roundValue = 10
tier1.sinks.k1.hdfs.roundUnit = minute
# Use a channel which buffers events in memory
tier1.channels.c1.type = memory
tier1.channels.c1.capacity = 1000
tier1.channels.c1.transactionCapacity = 100
# Bind the source and sink to the channel
tier1.sources.r1.channels = c1
tier1.sinks.k1.channel = c1
I believe the mistake is in the source: this kind of source does not take a host name or IP to look at (which in this case would be my PC). Could someone give me a hint as to how to tail a file on my PC and upload it to the remote HDFS using Flume?
The exec source in your configuration will run on the machine where you start Flume's tier1 agent. If you want to collect data from another machine, you'll need to start a Flume agent on that machine too. To sum up, you need:
an agent (remote1) running on the remote machine that has an avro source, which will listen for events from collector agents and act as an aggregator;
an agent (local1) running on your machine (acting as a collector) that has an exec source and sends data to the remote agent via an avro sink.
Alternatively, you can have only one Flume agent running on your local machine (with the same configuration you posted) and set the HDFS path to "hdfs://REMOTE_IP/hdfs/path" (though I'm not entirely sure this will work).
Edit:
Below are sample configurations for the two-agent scenario (they may need some modification to work).
remote1.channels.mem-ch-1.type = memory
remote1.sources.avro-src-1.channels = mem-ch-1
remote1.sources.avro-src-1.type = avro
remote1.sources.avro-src-1.port = 10060
remote1.sources.avro-src-1.bind = 10.88.66.4 /* REPLACE WITH YOUR MACHINE'S EXTERNAL IP */
remote1.sinks.k1.channel = mem-ch-1
remote1.sinks.k1.type = hdfs
remote1.sinks.k1.hdfs.path = /user/ntimbadi/flume/
remote1.sinks.k1.hdfs.filePrefix = events-
remote1.sinks.k1.hdfs.round = true
remote1.sinks.k1.hdfs.roundValue = 10
remote1.sinks.k1.hdfs.roundUnit = minute
remote1.sources = avro-src-1
remote1.sinks = k1
remote1.channels = mem-ch-1
and
local1.channels.mem-ch-1.type = memory
local1.sources.exc-src-1.channels = mem-ch-1
local1.sources.exc-src-1.type = exec
local1.sources.exc-src-1.command = tail -F /(Path to file on my PC)
local1.sinks.avro-snk-1.channel = mem-ch-1
local1.sinks.avro-snk-1.type = avro
local1.sinks.avro-snk-1.hostname = 10.88.66.4 /* REPLACE WITH REMOTE IP */
local1.sinks.avro-snk-1.port = 10060
local1.sources = exc-src-1
local1.sinks = avro-snk-1
local1.channels = mem-ch-1
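For completeness, a sketch of how the two agents might be started, following the flume-ng invocation used elsewhere on this page (the configuration file names are assumptions; only the --conf-file path and --name need to match your setup):
# on the remote machine (the one with HDFS access)
flume-ng agent --conf conf --conf-file remote1.conf --name remote1
# on your local PC
flume-ng agent --conf conf --conf-file local1.conf --name local1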
