BT Export - Buffer when Flume is unavailable

kristof_renders
Dynatrace Pro

Hi,

A customer was having issues with Flume, which kept returning 503 errors when Dynatrace tried to send data to it. At some point the issue was resolved and the data started flowing again.

The good news was that all of the missing data (a couple of days' worth) also appeared correctly, so it seems it was buffered somewhere.

Now my question: how big is that buffer when Flume is unavailable, and can its size be changed?

Thanks!

KR,
Kristof

2 REPLIES

andreas_grabner
Dynatrace Leader

Hi Kristof. If you haven't received an answer to that question yet, I propose forwarding it to the product team in Linz. I will be off for the next few weeks, so maybe you can reach out to them, see if you can get an answer, and then post it on the forum.

srikar_mohan333
Inactive

My understanding is that BT results are sent to Flume in real time as they are computed (i.e., when BTs are real-time analyzed), which makes me think there is no special buffer for BT export data, at least nothing that could hold that much. Could it be that the data was held in the Flume channel? Is Splunk the consumer of the Flume output? If so, could it be that the output (CSV or JSON format) was generated in Flume but took a while for Splunk to process?

In the past I have seen issues where Flume was undersized, which caused stale (old) data to be written again and again. After experimenting with the sink/channel capacity settings, we ended up with the values below, which seemed to work well (also consider increasing Flume's heap size):

agent1.sinks.PurePathSink.sink.batchSize = 100

agent1.channels.PurePathChannel.capacity = 10000
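For context, here is a minimal sketch of how those two settings might sit in a complete Flume agent configuration. Only the batchSize and capacity lines above come from our setup; the HTTP source, the file_roll sink writing files for Splunk to pick up, the transactionCapacity value, and the port/directory are assumptions for illustration:

# Hypothetical flume.conf wiring around the two settings above
agent1.sources = PurePathSource
agent1.channels = PurePathChannel
agent1.sinks = PurePathSink

# Assumed HTTP source receiving the BT export feed (port is illustrative)
agent1.sources.PurePathSource.type = http
agent1.sources.PurePathSource.port = 4321
agent1.sources.PurePathSource.channels = PurePathChannel

# Memory channel: capacity caps how many events can be buffered;
# transactionCapacity (assumed value) must not exceed capacity
agent1.channels.PurePathChannel.type = memory
agent1.channels.PurePathChannel.capacity = 10000
agent1.channels.PurePathChannel.transactionCapacity = 1000

# Assumed file_roll sink writing CSV/JSON files for Splunk to index;
# "sink.batchSize = 100" above matches file_roll's property naming
agent1.sinks.PurePathSink.type = file_roll
agent1.sinks.PurePathSink.channel = PurePathChannel
agent1.sinks.PurePathSink.sink.directory = /var/flume/btexport
agent1.sinks.PurePathSink.sink.batchSize = 100

To raise Flume's heap, set JAVA_OPTS in conf/flume-env.sh, for example JAVA_OPTS="-Xms512m -Xmx2048m". Also worth noting: a memory channel loses its contents if the agent restarts, so if a couple of days of data genuinely survived the outage, the buffering more likely happened on the Dynatrace side before the export, which is why the original question about that buffer's size is worth taking to the product team.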

Curious to see if you hear otherwise about the internal workings of this.

Thanks!