We are using flume and S3 to store our events.
I noticed that events are only transferred to S3 when the HDFS sink rolls to the next file or Flume is shut down gracefully.
This can, in my mind, lead to potential data loss. The Flume documentation states:
...Flume uses a transactional approach to guarantee the reliable
delivery of the Events...
Here is my configuration:
agent.sinks.defaultSink.type = HDFSEventSink
agent.sinks.defaultSink.hdfs.fileType = DataStream
agent.sinks.defaultSink.channel = fileChannel
agent.sinks.defaultSink.serializer = avro_event
agent.sinks.defaultSink.serializer.compressionCodec = snappy
agent.sinks.defaultSink.hdfs.path = s3n://testS3Bucket/%Y/%m/%d
agent.sinks.defaultSink.hdfs.filePrefix = events
agent.sinks.defaultSink.hdfs.rollInterval = 3600
agent.sinks.defaultSink.hdfs.rollCount = 0
agent.sinks.defaultSink.hdfs.rollSize = 262144000
agent.sinks.defaultSink.hdfs.batchSize = 10000
agent.sinks.defaultSink.hdfs.useLocalTimeStamp = true
#### CHANNELS ####
agent.channels.fileChannel.type = file
agent.channels.fileChannel.capacity = 1000000
agent.channels.fileChannel.transactionCapacity = 10000
I assume that I am just doing something wrong. Any ideas?
After some investigation I found one of the main problems with using S3 together with Flume and the HDFS sink.
One of the main differences between plain HDFS and the S3 implementation is that S3 does not directly support rename. When a file is renamed in S3, it is copied to the new name and the old file is deleted. (see: How to rename files and folders in Amazon S3?)
By default, Flume appends .tmp to a file name while the file is still being written. After the rotation the file is renamed to its final filename. In HDFS this is no problem, but in S3 it can cause problems, according to this issue:
https://issues.apache.org/jira/browse/FLUME-2445
Because S3 with the HDFS sink seems not to be 100% trustworthy, I prefer the safer approach of saving all files locally and syncing/deleting the finished files with the AWS CLI tool s3 sync (http://docs.aws.amazon.com/cli/latest/reference/s3/sync.html).
In the worst case the files are not synced or the local disk fills up, but both problems can easily be caught via a monitoring system that should be in place anyway.
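A sketch of that approach, reusing the file channel from the configuration above; the sink name, local directory and cron schedule are illustrative assumptions:
agent.sinks.localSink.type = file_roll
agent.sinks.localSink.channel = fileChannel
agent.sinks.localSink.sink.directory = /var/flume/events
agent.sinks.localSink.sink.rollInterval = 3600
# Illustrative cron entry to push completed files to S3 once per hour:
# 0 * * * * aws s3 sync /var/flume/events s3://testS3Bucket/events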
Related
I'm investigating the performance of a Flink job that transports data from Kafka to an S3 sink.
We are using a BucketingSink to write Parquet files. The bucketing logic puts the messages into a folder per type of data, tenant (customer), date-time, extraction ID, etc. As a result, each file is stored in a folder structure composed of 9-10 layers (s3_bucket:/1/2/3/4/5/6/7/8/9/myFile...)
If the data arrives as bursts of messages per tenant-type we see good write performance, but when the data is closer to a white-noise distribution across thousands of tenants, dozens of data types and multiple extraction IDs, we see an enormous loss of performance (on the order of 300x).
Attaching a debugger, the issue seems to be connected to the number of handles open at the same time on S3 to write data.
Researching the Hadoop libraries used to write to S3, I found some possible improvements by setting:
<name>fs.s3a.connection.maximum</name>
<name>fs.s3a.threads.max</name>
<name>fs.s3a.threads.core</name>
<name>fs.s3a.max.total.tasks</name>
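For illustration, these could be set in core-site.xml roughly like this (the values are assumptions, not the ones we ultimately used):
<property>
  <name>fs.s3a.connection.maximum</name>
  <value>200</value>
</property>
<property>
  <name>fs.s3a.threads.max</name>
  <value>64</value>
</property>
<property>
  <name>fs.s3a.threads.core</name>
  <value>32</value>
</property>
<property>
  <name>fs.s3a.max.total.tasks</name>
  <value>128</value>
</property>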
But none of these made a big difference in throughput.
I also tried to flatten the folder structure and write to a single key like (1_2_3_...), but this didn't bring any improvement either.
Note: The tests have been done on Flink 1.8 with the Hadoop FileSystem (BucketingSink), writing to S3 using the hadoop fs libraries 2.6.x (as we use Cloudera CDH 5.x for savepoints), so we can't switch to StreamingFileSink.
Following the suggestion from Kostas in https://lists.apache.org/thread.html/50ef4d26a1af408df8d9abb70589699cb6b26b2600ab6f4464e86ea4%40%3Cdev.flink.apache.org%3E
we found that the culprit of the slow-down is this piece of code:
https://github.com/apache/flink/blob/master/flink-connectors/flink-connector-filesystem/src/main/java/org/apache/flink/streaming/connectors/fs/bucketing/BucketingSink.java#L543-L551
This alone takes around 4-5 secs, with a total of 6 secs to open the file. Logs from an instrumented call:
2020-02-07 08:51:05,825 INFO BucketingSink - openNewPartFile FS verification
2020-02-07 08:51:09,906 INFO BucketingSink - openNewPartFile FS verification - done
2020-02-07 08:51:11,181 INFO BucketingSink - openNewPartFile FS - completed partPath = s3a://....
This, together with the bucketing sink's default setup of a 60-second inactivity rollover
https://github.com/apache/flink/blob/master/flink-connectors/flink-connector-filesystem/src/main/java/org/apache/flink/streaming/connectors/fs/bucketing/BucketingSink.java#L195
means that with more than 10 parallel buckets on a slot, by the time we finish creating the last bucket the first one has already become stale (roughly 10 buckets x ~6 s per file open reaches the 60 s threshold), so it needs to be rotated, generating a blocking situation.
We solved this by replacing BucketingSink.java, deleting the FS check mentioned above:
LOG.debug("Opening new part file FS verification");
if (!fs.exists(bucketPath)) {
try {
if (fs.mkdirs(bucketPath)) {
LOG.debug("Created new bucket directory: {}", bucketPath);
}
}
catch (IOException e) {
throw new RuntimeException("Could not create new bucket path.", e);
}
}
LOG.debug("Opening new part file FS verification - done");
We see that the sink works fine without it; the file opening now takes ~1.2 sec.
Moreover, we set the default inactive threshold to 5 minutes. With these changes we can easily handle more than 200 buckets per slot (once the job picks up speed it ingests on all the slots, which postpones the inactive timeout).
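For reference, a sketch of how the inactivity settings can be raised on the BucketingSink; the event type, base path and exact values below are illustrative assumptions, not our production settings:
import org.apache.flink.streaming.connectors.fs.bucketing.BucketingSink;
import org.apache.flink.streaming.connectors.fs.bucketing.DateTimeBucketer;

// Illustrative sketch: raise the inactivity threshold so slow-filling buckets
// are not rotated while other buckets are still being opened.
BucketingSink<String> sink = new BucketingSink<>("s3a://my-bucket/base-path"); // path is an assumption
sink.setBucketer(new DateTimeBucketer<>("yyyy-MM-dd--HH"));
sink.setInactiveBucketThreshold(5 * 60 * 1000L);   // close buckets only after 5 minutes of inactivity
sink.setInactiveBucketCheckInterval(60 * 1000L);   // check for inactive buckets once per minute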
I am using Flume to store sensor data in HDFS. The data is received through MQTT, and the subscriber posts it in JSON format to the Flume HTTP listener. It is currently working fine, but the problem is that Flume does not write to the HDFS file until I stop it (or until the file size reaches 128 MB). I am using Hive to apply a schema on read. Unfortunately, the resulting Hive table contains only one entry. This is expected, because Flume has not yet written the newly arriving data to the file (which Hive loads).
Is there any way to force Flume to write newly arriving data to HDFS in near real time, so that I don't need to restart it or resort to small files?
Here is my Flume configuration:
# Name the components on this agent
emsFlumeAgent.sources = http_emsFlumeAgent
emsFlumeAgent.sinks = hdfs_sink
emsFlumeAgent.channels = channel_hdfs
# Describe/configure the source
emsFlumeAgent.sources.http_emsFlumeAgent.type = http
emsFlumeAgent.sources.http_emsFlumeAgent.bind = localhost
emsFlumeAgent.sources.http_emsFlumeAgent.port = 41414
# Describe the sink
emsFlumeAgent.sinks.hdfs_sink.type = hdfs
emsFlumeAgent.sinks.hdfs_sink.hdfs.path = hdfs://localhost:9000/EMS/%{sensor}
emsFlumeAgent.sinks.hdfs_sink.hdfs.rollInterval = 0
emsFlumeAgent.sinks.hdfs_sink.hdfs.rollSize = 134217728
emsFlumeAgent.sinks.hdfs_sink.hdfs.rollCount=0
#emsFlumeAgent.sinks.hdfs_sink.hdfs.idleTimeout=20
# Use a channel which buffers events in memory
emsFlumeAgent.channels.channel_hdfs.type = memory
emsFlumeAgent.channels.channel_hdfs.capacity = 10000
emsFlumeAgent.channels.channel_hdfs.transactionCapacity = 100
# Bind the source and sinks to the channel
emsFlumeAgent.sources.http_emsFlumeAgent.channels = channel_hdfs
emsFlumeAgent.sinks.hdfs_sink.channel = channel_hdfs
I think the tricky bit here is that you would like to write data to HDFS in near real time but don't want small files either (for obvious reasons), and this can be a difficult balance to achieve.
You'll need to find optimal balance between the following two parameters:
hdfs.rollSize (Default = 1024) - File size to trigger roll, in bytes (0: never roll based on file size)
and
hdfs.batchSize (Default = 100) - Number of events written to file before it is flushed to HDFS
If your data is not likely to reach 128 MB in the preferred time duration, then you may need to reduce the rollSize but only to an extent that you don't run into the small files problem.
Since you have not set any batch size in your HDFS sink, you should see the results of an HDFS flush after every 100 records; but once the size of the flushed records jointly reaches 128 MB, the contents would be rolled up into a 128 MB file. Is this not happening either? Could you please confirm?
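For illustration, a possible adjustment of the sink section, reusing the agent and sink names from your configuration (the values themselves are placeholders to be tuned against your actual ingest rate):
# Illustrative values only; tune against your actual ingest rate
# Roll at ~32 MB instead of the 128 MB default
emsFlumeAgent.sinks.hdfs_sink.hdfs.rollSize = 33554432
# Flush to HDFS every 100 events
emsFlumeAgent.sinks.hdfs_sink.hdfs.batchSize = 100
emsFlumeAgent.sinks.hdfs_sink.hdfs.rollCount = 0
emsFlumeAgent.sinks.hdfs_sink.hdfs.rollInterval = 0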
Hope this helps!
I am trying to move my files from the local system into HDFS using Flume, but when I run Flume it creates many small files. My original files are 154-500 KB in size, but in HDFS many files of 4-5 KB are created. I searched and learned that changing rollSize and rollCount should help; I increased the values, but the same issue still happens. I am also getting the error below.
Error:
ERROR hdfs.BucketWriter: Hit max consecutive under-replication
rotations (30); will not continue rolling files under this path due to
under-replication
As I am working on a cluster I am a bit hesitant to make changes in hdfs-site.xml. Please suggest what I can do to either move the original files into HDFS as they are, or make the small files bigger (50-60 KB instead of 4-5 KB).
Below is my configuration.
Configuration:
agent1.sources = source1
agent1.sinks = sink1
agent1.channels = channel1
agent1.sources.source1.channels = channel1
agent1.sinks.sink1.channel = channel1
agent1.sources.source1.type = spooldir
agent1.sources.source1.spoolDir = /root/Downloads/CD/parsedCD
agent1.sources.source1.deletePolicy = immediate
agent1.sources.source1.basenameHeader = true
agent1.sinks.sink1.type = hdfs
agent1.sinks.sink1.hdfs.path = /user/cloudera/flumecd
agent1.sinks.sink1.hdfs.fileType = DataStream
agent1.sinks.sink1.hdfs.filePrefix = %{basename}
agent1.sinks.sink1.hdfs.rollInterval = 0
agent1.sinks.sink1.hdfs.batchsize= 1000
agent1.sinks.sink1.hdfs.rollSize= 1000000
agent1.sinks.sink1.hdfs.rollCount= 0
agent1.channels.channel1.type = memory
agent1.channels.channel1.maxFileSize =900000000
I think the error you are posting is clear enough: the files you are creating are under-replicated (meaning the blocks of those files, which are distributed across the cluster, have fewer copies than the replication factor, usually 3); and while that situation persists, no more rolls will be done (because each time you roll the file a new under-replicated file is created, and the maximum allowed number of such rotations, 30, has been reached).
I recommend you check why the files are under-replicated. Maybe the cluster is running out of disk space, or the cluster was set up with the minimum number of nodes (i.e. 3) and one of them is down (i.e. only 2 datanodes are alive while the replication factor is set to 3).
Other options (not recommended) would be to decrease the replication factor, even down to 1, or to increase the allowed number of under-replicated rolls (I don't know whether that is possible, and even if it is, you will eventually hit the same error again).
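Purely to illustrate that first (not recommended) option, lowering the replication factor would be an hdfs-site.xml change along these lines:
<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>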
I'm new to Flume and I am exploring options to roll over my HDFS files on an hourly basis using Flume.
In my project, Apache Flume reads messages from RabbitMQ and writes them to HDFS.
hdfs.rollInterval - closes the file based on the time interval since it was opened.
A new file is created only when Flume reads a message after the previous file has been closed. This option does not solve our problem.
hdfs.path = /%y/%m/%d/%H - This option works fine and creates a folder every hour. But the problem is that the new folder is created only when a new message comes in.
For example: messages come in until 11:59 and the file stays open. Then messages stop coming until 12:30, but the file is still open. After 12:30 a new message comes; because of the hdfs.path configuration, the previous file is then closed and a new file is created in the new folder.
The previous file cannot be used for computation until it is closed.
We need a way to reliably close the open files on an hourly basis. I'm wondering if there are any options in Flume for doing that.
hdfs.rollInterval is described as
Number of seconds to wait before rolling current file
So this line should cause each file to stay open for an hour at a time:
hdfs.rollInterval = 3600
And I would additionally ignore file size and event count, so add these as well:
hdfs.rollSize = 0
hdfs.rollCount = 0
hdfs.idleTimeout
Timeout after which inactive files get closed (0 = disable automatic closing of idle files)
For example, you can set this property to 180 so that a file which remains open but receives no new events is closed automatically after 180 seconds.
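Putting these together, a sketch of the relevant sink block (the agent and sink names here are placeholders):
agent.sinks.hdfs_sink.hdfs.rollInterval = 3600
agent.sinks.hdfs_sink.hdfs.rollSize = 0
agent.sinks.hdfs_sink.hdfs.rollCount = 0
# Close files that receive no events for 3 minutes
agent.sinks.hdfs_sink.hdfs.idleTimeout = 180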
I am trying to write Flume events to Amazon S3. The events written to S3 are in compressed format. My Flume configuration is given below. I am facing data loss: with the configuration below, if I publish 20,000 events I receive only 1,000 events and all the other data is lost. However, when I disable the rollCount, rollSize and rollInterval settings, all the events are received, but 2,000 small files are created. Is there anything wrong with my configuration settings? Should I add any other configuration?
injector.sinks.s3_3store.type = hdfs
injector.sinks.s3_3store.channel = disk_backed4
injector.sinks.s3_3store.hdfs.fileType = CompressedStream
injector.sinks.s3_3store.hdfs.codeC = gzip
injector.sinks.s3_3store.hdfs.serializer = TEXT
injector.sinks.s3_3store.hdfs.path = s3n://CID:SecretKey#bucketName/dth=%Y-%m-%d-%H
injector.sinks.s3_1store.hdfs.filePrefix = events-%{receiver}
# Roll when files reach 256M or after 10m, whichever comes first
injector.sinks.s3_3store.hdfs.rollCount = 0
injector.sinks.s3_3store.hdfs.idleTimeout = 600
injector.sinks.s3_3store.hdfs.rollSize = 268435456
#injector.sinks.s3_3store.hdfs.rollInterval = 3600
# Flush data to buckets every 1k events
injector.sinks.s3_3store.hdfs.batchSize = 10000
For starters: if you disable your settings for rollCount, rollSize and so on, Flume reverts to the defaults, hence the many small files you receive; those are the result of the default values.
The relevant aspect is this:
injector.sinks.s3_3store.hdfs.batchSize = 10000
It basically tells your sink to collect 10,000 events before flushing. If you reduce that amount, you'll get smaller files too, because S3, in contrast to regular HDFS, doesn't support file appends. Once you flush, the file is closed and a new file is created.
Try to determine how many events your sink receives within a short time frame of a couple of minutes or so, and set that value as your batch size.
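For illustration, assuming the sink receives roughly 2,000 events over a couple of minutes (that figure is an assumption, not taken from your numbers), the settings could be adjusted like this:
# Flush roughly every couple of minutes' worth of events (value is an assumption)
injector.sinks.s3_3store.hdfs.batchSize = 2000
# Keep size-based rolling, disable count-based rolling
injector.sinks.s3_3store.hdfs.rollCount = 0
injector.sinks.s3_3store.hdfs.rollSize = 268435456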