
OpenPipeline - processing logs from Azure

Good afternoon community,

I'm struggling with OpenPipeline, as I get confused by the different settings in Dynatrace.

The original situation is the following:

  1. I "log forward" logs from Azure to DT via Dynatrace Azure Native Service - so far so good.
  2. When data come in Dynatrace there is a processing rule (created by DT): [Built-in] cloud:azure:common
  3. I create a bucket with a specific matcher to just pick some lines out of all the log - up until here everything is Vanilla
  4. Now: I want to create a pipeline that parse exactly the same lines to extract some additional field.
    1. here is my doubt: during the processing step set up, should I consider the log lines as I can query it in Notebook (with step 2 applied)  or Should I consider only the "content" field of each line before step 2 is applied? 
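For illustration, here is a minimal sketch of what I mean by such a processing step, as a DQL processor. The field names (operation, status) and the key=value line shape are just assumptions for the example, not taken from my actual logs:

```
// OpenPipeline DQL processor step - hypothetical example.
// Extracts two fields from the raw "content" of each matching record,
// assuming lines of the form: ... operation=Write status=Succeeded ...
parse content, "LD 'operation=' WORD:operation LD 'status=' WORD:status"
```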

The already-existing processing rules on logs always get me confused with the processing steps in a pipeline...
Thank you to whoever can clarify this,

regards


Julius_Loman
DynaMight Legend

@y_buccellato If I got your situation right, the latter is true. OpenPipeline works at ingestion, so in OpenPipeline you are working with the raw data as it is received.
There is also the "Classic pipeline", which means the log processing rules and metric/event extractions you can find in Settings -> Log Monitoring. So you will have to "replicate" some of the built-in processing rules in your pipeline - if they are needed in your case.

Certified Dynatrace Master | Alanata a.s., Slovakia, Dynatrace Master Partner

Thank you - I had this intuition when the processing step in my pipeline was not being applied (the matcher wasn't matching azure.<anyfield>), but it's good that you confirm this too 🙂

So, potentially, if I build a pipeline in front of the existing DT config, I could break the built-in rule that exists for Azure logs?
