Increasing the file size in flume with memory channel - hadoop

Below is my Flume config file. Even after changing the rollInterval and rollSize, only 10 events are written per file, and the console shows rollCount=10 and events=10. I also tried increasing the rollCount to 1000, but the output did not change. Can anyone suggest how to increase the size of the files being written to HDFS? What is wrong with the conf file below?
#naming components
NetAgent.sources = NetCat_1 NetCat_2
NetAgent.sinks = HDFS
NetAgent.channels = MemChannel
NetAgent.sources.NetCat_1.type = netcat
NetAgent.sources.NetCat_1.bind = localhost
NetAgent.sources.NetCat_1.port = 8671
NetAgent.sources.NetCat_2.type = netcat
NetAgent.sources.NetCat_2.bind = localhost
NetAgent.sources.NetCat_2.port = 8672
NetAgent.sinks.HDFS.type = hdfs
NetAgent.sinks.HDFS.hdfs.path = file path here
NetAgent.sinks.HDFS.hdfs.filePrefix = test
NetAgent.sinks.HDFS.hdfs.rollSize = 67108864
NetAgent.sinks.HDFS.hdfs.rollInterval = 3600
NetAgent.sinks.HDFS.rollCount = 0
NetAgent.sinks.HDFS.hdfs.batchSize = 10000
NetAgent.sinks.HDFS.hdfs.writeFormat = Text
NetAgent.sinks.HDFS.hdfs.fileType = DataStream
NetAgent.channels.MemChannel.type = memory
NetAgent.channels.MemChannel.capacity = 20000
NetAgent.channels.MemChannel.transactionCapacity = 20000
NetAgent.sources.NetCat_1.channels = MemChannel
NetAgent.sources.NetCat_2.channels = MemChannel
NetAgent.sinks.HDFS.channel = MemChannel
The console logs:
(SinkRunner-PollingRunner-DefaultSinkProcessor) [DEBUG-org.apache.flume.sink.hdfs.BucketWriter.shouldRotate(BucketWriter.java)]
rolling: rollCount: 10, events: 10
The image shows the files written to HDFS.

You forgot to add the hdfs prefix to your rollCount configuration, so Flume falls back to the default value of 10 because it doesn't see your setting. Notice that your HDFS sink config is:
NetAgent.sinks.HDFS.type = hdfs
NetAgent.sinks.HDFS.hdfs.rollSize = 67108864
NetAgent.sinks.HDFS.hdfs.rollInterval = 3600
NetAgent.sinks.HDFS.rollCount = 0
NetAgent.sinks.HDFS.hdfs.batchSize = 10000
NetAgent.sinks.HDFS.hdfs.writeFormat = Text
NetAgent.sinks.HDFS.hdfs.fileType = DataStream
In the rollCount line, it needs to be:
NetAgent.sinks.HDFS.hdfs.rollCount = 0
This will override the default rollCount and your Flume agent will behave how you want it to.
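For reference, the corrected sink block would look like this (values copied from the question above; only the rollCount key gains the hdfs. prefix):
NetAgent.sinks.HDFS.type = hdfs
NetAgent.sinks.HDFS.hdfs.path = file path here
NetAgent.sinks.HDFS.hdfs.filePrefix = test
NetAgent.sinks.HDFS.hdfs.rollSize = 67108864
NetAgent.sinks.HDFS.hdfs.rollInterval = 3600
# rollCount now carries the hdfs. prefix, so the default of 10 no longer applies
NetAgent.sinks.HDFS.hdfs.rollCount = 0
NetAgent.sinks.HDFS.hdfs.batchSize = 10000
NetAgent.sinks.HDFS.hdfs.writeFormat = Text
NetAgent.sinks.HDFS.hdfs.fileType = DataStream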

Related

FlumeData file not getting created in HDFS sink

I am trying to ingest real-time data using Kafka as the source and Flume as the sink; the sink type is HDFS. My producer is working fine, I can see the data being produced, and my agent runs without errors, but no file is generated in the specified directory.
Command for starting the Flume agent:
/usr/hdp/2.5.0.0-1245/flume/bin/flume-ng agent -c /usr/hdp/2.5.0.0-1245/flume/conf -f /usr/hdp/2.5.0.0-1245/flume/conf/flume-hdfs.conf -n tier1
And my flume-hdfs.conf file:
tier1.sources = source1
tier1.channels = channel1
tier1.sinks = sink1
tier1.sources.source1.type = org.apache.flume.source.kafka.KafkaSource
tier1.sources.source1.zookeeperConnect = localhost:2181
tier1.sources.source1.topic = data_1
tier1.sources.source1.channels = channel1
tier1.channels.channel1.type = org.apache.flume.channel.kafka.KafkaChannel
tier1.channels.channel1.brokerList = localhost:6667
tier1.channels.channel1.zookeeperConnect = localhost:2181
tier1.channels.channel1.capacity = 10000
tier1.channels.channel1.transactionCapacity = 1000
tier1.sinks.sink1.type = hdfs
tier1.sinks.sink1.hdfs.path = /user/user_name/FLUME_LOGS/
tier1.sinks.sink1.hdfs.rollInterval = 5
tier1.sinks.sink1.hdfs.rollSize = 0
tier1.sinks.sink1.hdfs.rollCount = 0
tier1.sinks.sink1.hdfs.fileType = DataStream
tier1.sinks.sink1.channel = channel1
I am not able to find out what is wrong with the execution.
Please suggest how to overcome this problem.
Set the path of the HDFS sink this way:
tier1.sinks.sink1.hdfs.path = "VALUE of fs.default.name, located in core-site.xml"/user/user_name/FLUME_LOGS/
For example
tier1.sinks.sink1.hdfs.path = hdfs://localhost:54310/user/user_name/FLUME_LOGS/
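If you are not sure of that value, it can be read from the Hadoop client configuration; for example (assuming the hdfs command is on the PATH):
hdfs getconf -confKey fs.defaultFS
Prepend the printed URI to the directory, exactly as in the example above.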

How to make Flume load files to HDFS when HDFS never closes and renames the .tmp file, and how to keep the original file name

Actually I have 2 questions. My first question is: how do I make HDFS close the file (for example .123456789.tmp) after the entire file has been flushed by the Flume agent?
In fact, the file is never closed until I force the Flume agent to stop.
I believe there is a way using the 4 parameters below:
hdfs.rollSize = 0
hdfs.rollCount =0
hdfs.rollInterval = 0
hdfs.batchsize = 1000000
My second question is: my Flume agent receives files from an SFTP server, and I need to keep each file's name in HDFS. It works fine with the spooldir type, but not with SFTP. Are there any ideas?
My configuration file for the Flume agent is as follows:
agent.sources = r1
agent.channels = c1
agent.sinks = k
# configure ftp source
agent.sources.r1.type = org.keedio.flume.source.mra.source.Source
agent.sources.r1.client.source = sftp
agent.sources.r1.name.server = ip
agent.sources.r1.user = user
agent.sources.r1.password = secret
agent.sources.r1.port = 22
agent.sources.r1.knownHosts = ~/.ssh/known_hosts
agent.sources.r1.work.dir = /DATA/test/flumrFTP
agent.sources.r1.fileHeader = true
agent.sources.r1.basenameHeader = true
agent.sources.r1.inputCharset = ISO-8859-1
#agent.sources.r1.batchSize = 1000
agent.sources.r1.flushlines = true
# configure sink s1
agent.sinks.k.type = hdfs
agent.sinks.k.hdfs.path = hdfs://hostname:8000/user/admin/DATA/import_flume/
agent.sinks.k.hdfs.filePrefix = %{basename}
agent.sinks.k.hdfs.rollCount = 0
agent.sinks.k.hdfs.rollInterval = 0
agent.sinks.k.hdfs.rollSize = 0
agent.sinks.k.hdfs.useLocalTimeStamp = true
agent.sinks.k.hdfs.batchsize = 1000000
agent.sinks.k.hdfs.fileType = DataStream
# Use a channel which buffers events in memory
agent.channels.c1.type = memory
agent.channels.c1.capacity = 1000000
agent.channels.c1.transactionCapacity = 1000000
agent.sources.r1.channels = c1
agent.sinks.k.channel = c1
Try setting the variable hdfs.rollInterval: it is the number of seconds to wait before rolling the current file.
This setting closes the file after the number of seconds you set. I set mine to 200 seconds and I am loading smaller files.
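A minimal sketch of how that could look in the sink block above, using the 200 seconds mentioned here (tune the value to your own load):
# close and rename the in-progress .tmp file every 200 seconds;
# size- and count-based rolling stay disabled
agent.sinks.k.hdfs.rollInterval = 200
agent.sinks.k.hdfs.rollSize = 0
agent.sinks.k.hdfs.rollCount = 0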

Flume adding line feed after 2048 characters in a row

I have a Flume 1.5 agent running on an Ubuntu workstation that collects logs from various devices and re-formats the logs into a comma-delimited file with very long rows. After the collection and re-formatting of the logs, they are placed into a spool directory, from which the Flume agent sends the log file to a Hadoop server running another Flume agent that accepts the log file and places it in an HDFS directory.
Everything works fine, except that when Flume sends the file to the HDFS directory there are line feeds after every 2048 characters in each row.
Below are my Flume config files.
Is there a setting to tell Flume not to insert line feeds?
#On Ubuntu Workstation
#list sources, sinks and channels in the agent
agent.sources = axon_source
agent.channels = memorychannel
agent.sinks = AvroOut
#define flow
agent.sources.axon_source.channels = memorychannel
agent.sinks.AvroOut.channel = memorychannel
agent.channels.memorychannel.type = memory
agent.channels.memorychannel.capacity = 100000
#source
agent.sources.axon_source.type = spooldir
agent.sources.axon_source.spoolDir = /home/ubuntu/workspace/logdump
agent.sources.axon_source.decodeErrorPolicy = ignore
#avro out
agent.sinks.AvroOut.type = avro
agent.sinks.AvroOut.hostname = 172.31.12.221
agent.sinks.AvroOut.port = 41415
agent.sinks.AvroOut.maxIoWorkers = 2
------------------------------------------------------------
#On Hadoop Server
agent.sources = AvroIn
agent.sources.AvroIn.type = avro
agent.sources.AvroIn.bind = 172.31.131.1
agent.sources.AvroIn.port = 41415
agent.sources.AvroIn.channels = MemChan1
agent.channels = MemChan1
agent.channels.MemChan1.type = memory
agent.channels.MemChan1.capacity = 100000
agent.sinks = HDFSSink
agent.sinks.HDFSSink.type = hdfs
agent.sinks.HDFSSink.channel = MemChan1
agent.sinks.HDFSSink.hdfs.path = /Logs/%Y%m/
agent.sinks.HDFSSink.hdfs.filePrefix = axoncapture
agent.sinks.HDFSSink.hdfs.fileSuffix = .log
agent.sinks.HDFSSink.hdfs.minBlockReplicas = 1
agent.sinks.HDFSSink.hdfs.rollCount = 0
agent.sinks.HDFSSink.hdfs.rollSize = 314572800
agent.sinks.HDFSSink.hdfs.writeFormat = Text
agent.sinks.HDFSSink.hdfs.fileType = DataStream
agent.sinks.HDFSSink.hdfs.useLocalTimeStamp = True
Found the answer to my question:
The default maxLineLength for the LINE deserializer is 2048:
http://archive.cloudera.com/cdh5/cdh/5/flume-ng/FlumeUserGuide.html#line
I added the following line to my flume.conf file, which fixed the problem:
agent.sources.axon_source.deserializer.maxLineLength=60000
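For context, a sketch of the spooldir source with that setting in place (the LINE deserializer is the default for spooldir, so it does not need to be named explicitly):
agent.sources.axon_source.type = spooldir
agent.sources.axon_source.spoolDir = /home/ubuntu/workspace/logdump
# raise the per-event line limit so long rows are no longer split at 2048 characters
agent.sources.axon_source.deserializer.maxLineLength = 60000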

Flume ElasticSearchSink does not consume all messages

I am using Flume to write log lines to HDFS and to index them into Elasticsearch using ElasticSearchSink.
Here is my configuration:
agent.channels.memory-channel.type = memory
agent.sources.tail-source.type = exec
agent.sources.tail-source.command = tail -4000 /home/cto/hs_err_pid11679.log
agent.sources.tail-source.channels = memory-channel
agent.sinks.log-sink.channel = memory-channel
agent.sinks.log-sink.type = logger
#####INTERCEPTORS
agent.sources.tail-source.interceptors = timestampInterceptor
agent.sources.tail-source.interceptors.timestampInterceptor.type = org.apache.flume.interceptor.TimestampInterceptor$Builder
####SINK
# Setting the sink to HDFS
agent.sinks.hdfs-sink.channel = memory-channel
agent.sinks.hdfs-sink.type = hdfs
agent.sinks.hdfs-sink.hdfs.path = hdfs://localhost:8020/data/flume/%y-%m-%d/
agent.sinks.hdfs-sink.hdfs.fileType = DataStream
agent.sinks.hdfs-sink.hdfs.inUsePrefix = .
agent.sinks.hdfs-sink.hdfs.rollCount = 0
agent.sinks.hdfs-sink.hdfs.rollInterval = 0
agent.sinks.hdfs-sink.hdfs.rollSize = 10000000
agent.sinks.hdfs-sink.hdfs.idleTimeout = 10
agent.sinks.hdfs-sink.hdfs.writeFormat = Text
agent.sinks.elastic-sink.channel = memory-channel
agent.sinks.elastic-sink.type = org.apache.flume.sink.elasticsearch.ElasticSearchSink
agent.sinks.elastic-sink.hostNames = 127.0.0.1:9300
agent.sinks.elastic-sink.indexName = flume_index
agent.sinks.elastic-sink.indexType = logs_type
agent.sinks.elastic-sink.clusterName = elasticsearch
agent.sinks.elastic-sink.batchSize = 500
agent.sinks.elastic-sink.ttl = 5d
agent.sinks.elastic-sink.serializer = org.apache.flume.sink.elasticsearch.ElasticSearchDynamicSerializer
# Finally, activate.
agent.channels = memory-channel
agent.sources = tail-source
agent.sinks = log-sink hdfs-sink elastic-sink
The problem is that I only see 1-2 messages in Elasticsearch using Kibana, but lots of messages in the HDFS files.
Any idea what I am missing here?
The problem is related to a bug in the serializer.
If we drop the line:
agent.sinks.elastic-sink.serializer = org.apache.flume.sink.elasticsearch.ElasticSearchDynamicSerializer
the messages are consumed with no problem.
The problem is with the way the @timestamp field is created when using the serializer.
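So the working Elasticsearch sink block is the one from the question minus the serializer line; a minimal sketch:
agent.sinks.elastic-sink.channel = memory-channel
agent.sinks.elastic-sink.type = org.apache.flume.sink.elasticsearch.ElasticSearchSink
agent.sinks.elastic-sink.hostNames = 127.0.0.1:9300
agent.sinks.elastic-sink.indexName = flume_index
agent.sinks.elastic-sink.indexType = logs_type
agent.sinks.elastic-sink.clusterName = elasticsearch
agent.sinks.elastic-sink.batchSize = 500
agent.sinks.elastic-sink.ttl = 5d
# no serializer line: the sink falls back to its default serializer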

Flume loses data when collecting online data to HDFS

I used flume-ng version 1.5 to collect logs.
There are two agents in the data flow, and they run on two different hosts.
The data is sent from agent1 to agent2.
The agents' components are as follows:
agent1: spooling dir source --> file channel --> avro sink
agent2: avro source --> file channel --> hdfs sink
But it seems to lose data, roughly 1 in 1000 out of millions of records.
To track down the problem I tried these steps:
looked at the agents' logs: cannot find any error or exception.
looked at the agents' monitoring metrics: the number of events put into and taken from the channel is always equal.
counted the records with a Hive query and with a shell count over the HDFS files, respectively: the two numbers are equal, and both are less than the number of online records.
agent1's configuration:
#agent
agent1.sources = src_spooldir
agent1.channels = chan_file
agent1.sinks = sink_avro
#source
agent1.sources.src_spooldir.type = spooldir
agent1.sources.src_spooldir.spoolDir = /data/logs/flume-spooldir
agent1.sources.src_spooldir.interceptors=i1
#interceptors
agent1.sources.src_spooldir.interceptors.i1.type=regex_extractor
agent1.sources.src_spooldir.interceptors.i1.regex=(\\d{4}-\\d{2}-\\d{2}).*
agent1.sources.src_spooldir.interceptors.i1.serializers=s1
agent1.sources.src_spooldir.interceptors.i1.serializers.s1.name=dt
#sink
agent1.sinks.sink_avro.type = avro
agent1.sinks.sink_avro.hostname = 10.235.2.212
agent1.sinks.sink_avro.port = 9910
#channel
agent1.channels.chan_file.type = file
agent1.channels.chan_file.checkpointDir = /data/flume/agent1/checkpoint
agent1.channels.chan_file.dataDirs = /data/flume/agent1/data
agent1.sources.src_spooldir.channels = chan_file
agent1.sinks.sink_avro.channel = chan_file
agent2's configuration:
# agent
agent2.sources = source1
agent2.channels = channel1
agent2.sinks = sink1
# source
agent2.sources.source1.type = avro
agent2.sources.source1.bind = 10.235.2.212
agent2.sources.source1.port = 9910
# sink
agent2.sinks.sink1.type= hdfs
agent2.sinks.sink1.hdfs.fileType = DataStream
agent2.sinks.sink1.hdfs.filePrefix = log
agent2.sinks.sink1.hdfs.path = hdfs://hnd.hadoop.jsh:8020/data/%{dt}
agent2.sinks.sink1.hdfs.rollInterval = 600
agent2.sinks.sink1.hdfs.rollSize = 0
agent2.sinks.sink1.hdfs.rollCount = 0
agent2.sinks.sink1.hdfs.idleTimeout = 300
agent2.sinks.sink1.hdfs.round = true
agent2.sinks.sink1.hdfs.roundValue = 10
agent2.sinks.sink1.hdfs.roundUnit = minute
# channel
agent2.channels.channel1.type = file
agent2.channels.channel1.checkpointDir = /data/flume/agent2/checkpoint
agent2.channels.channel1.dataDirs = /data/flume/agent2/data
agent2.sinks.sink1.channel = channel1
agent2.sources.source1.channels = channel1
Any suggestions are welcome!
There is a bug in the file line deserializer when it encounters certain UTF characters whose code points lie between U+10000 and U+10FFFF; these are represented in UTF-16 by two 16-bit code units called surrogate pairs.
